BILATERAL FILTERS IN VIDEO ENCODING WITH REDUCED COMPLEXITY
Patent abstract:
An illustrative method of filtering a reconstructed block of video data includes obtaining, by one or more processors, reconstructed samples of a current block of video data; and selectively bilaterally filtering, by the one or more processors, the reconstructed samples of the current block to generate a current filtered block. In this example, selectively bilaterally filtering the reconstructed samples of the current block comprises refraining from bilaterally filtering at least one reconstructed sample of the current block, so that the current filtered block includes at least one bilaterally unfiltered sample.

Publication number: BR112019015106A2
Application number: R112019015106-0
Filing date: 2018-01-25
Publication date: 2020-03-10
Inventors: Zhang Li; Chen Jianle; Chien Wei-Jung; Karczewicz Marta
Applicant: Qualcomm Incorporated
IPC main classification:
Patent description:
BILATERAL FILTERS IN REDUCED COMPLEXITY VIDEO CODING

[001] This Patent Application claims the benefit of U.S. Provisional Patent Application No. 62/451,555, filed on January 27, 2017, the entire content of which is incorporated by reference into this document.

TECHNICAL FIELD

[002] This disclosure is related to video encoding.

BACKGROUND

[003] Digital video capabilities can be incorporated into a wide variety of devices, including digital televisions, digital direct broadcast systems, wireless broadcast systems, personal digital assistants (PDAs), laptop or desktop computers, tablet computers, e-book readers, digital cameras, digital recording devices, digital media players, gaming devices, game consoles, cellular or satellite radio telephones, so-called smartphones, video teleconferencing devices, video streaming devices, among others. Digital video devices implement video compression techniques, such as those described in the standards defined by MPEG-2, MPEG-4, ITU-T H.263, ITU-T H.264/MPEG-4, Part 10, Advanced Video Coding (AVC), the ITU-T H.265 standard, High Efficiency Video Coding (HEVC), and extensions of such standards. Video devices can transmit, receive, encode, decode and/or store digital video information more efficiently by implementing such video compression techniques.

[004] Video compression techniques perform spatial prediction (within an image) and/or temporal prediction (between images) to reduce or remove the redundancy inherent in video sequences. For block-based video encoding, a video slice (that is, a video frame or part of a video frame) can be divided into video blocks, which can also be called tree blocks, coding units (CUs) and/or coding nodes. Video blocks in an intra-coded (I) slice of an image are encoded using spatial prediction with respect to reference samples in neighboring blocks in the same image. Video blocks in an inter-coded (P or B) slice of an image can use spatial prediction with respect to reference samples in neighboring blocks in the same image or temporal prediction with respect to reference samples in other reference images. Spatial or temporal prediction results in a predictive block for a block to be coded. Residual data represents pixel differences between the original block to be encoded and the predictive block. An inter-coded block is coded according to a motion vector that points to a block of reference samples that form the predictive block, and the residual data indicating the difference between the coded block and the predictive block. An intra-coded block is coded according to an intra coding mode and the residual data. For additional compression, the residual data can be transformed from the pixel domain to a transform domain, resulting in residual transform coefficients, which can then be quantized.

SUMMARY

[005] In general, this disclosure describes filtering techniques that can be used in a post-processing stage, as part of in-loop encoding, or in the prediction stage of video encoding. The filtering techniques of this disclosure can be applied to existing video codecs, such as High Efficiency Video Coding (HEVC), or be an efficient encoding tool in any future video coding standards.
[006] In one example, a method of filtering a reconstructed block of video data includes obtaining, by one or more processors, reconstructed samples of a current block of video data; and selectively filtering, by the one or more processors, the reconstructed samples of the current block to generate a current filtered block. In this example, selectively filtering the reconstructed samples of the current block comprises refraining from filtering at least one reconstructed sample of the current block, so that the current filtered block includes at least one unfiltered sample and at least one filtered sample.

[007] In another example, an apparatus for filtering a reconstructed block of video data includes a memory configured to store video data; and one or more processors. In this example, the one or more processors are configured to obtain reconstructed samples of a current block of video data; and selectively filter the reconstructed samples of the current block to generate a current filtered block. In this example, to selectively filter the reconstructed samples of the current block, the one or more processors are configured to refrain from filtering at least one reconstructed sample of the current block, so that the current filtered block includes at least one unfiltered sample and at least one filtered sample.

[008] In another example, an apparatus for filtering a reconstructed block of video data includes means for obtaining reconstructed samples of a current block of video data; and means for selectively filtering the reconstructed samples of the current block to generate a current filtered block. In this example, the means for selectively filtering the reconstructed samples of the current block is configured to refrain from filtering at least one reconstructed sample of the current block, so that the current filtered block includes at least one unfiltered sample and at least one filtered sample.

[009] In another example, a computer-readable storage medium stores instructions that, when executed, cause one or more processors of a device that filters a reconstructed block of video data to obtain reconstructed samples of a current block of video data; and selectively filter the reconstructed samples of the current block to generate a current filtered block. In this example, the instructions that cause the one or more processors to selectively filter the reconstructed samples of the current block comprise instructions that cause the one or more processors to refrain from filtering at least one reconstructed sample of the current block, so that the current filtered block includes at least one unfiltered sample and at least one filtered sample.

[0010] Details of one or more aspects of the disclosure are set out in the accompanying drawings and in the description below. Other characteristics, objectives and advantages of the techniques described in this disclosure will be evident from the description, drawings and claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] FIG. 1 is a block diagram illustrating an illustrative video encoding and decoding system that can use one or more techniques described in this disclosure.

[0012] FIG. 2 is a block diagram illustrating an illustrative video encoder that can implement the techniques described in this disclosure.

[0013] FIG. 3 is a conceptual diagram illustrating a typical example of intra prediction for a 16 x 16 image block.
[0014] FIGS. 4A and 4B are conceptual diagrams illustrating examples of intra prediction modes.

[0015] FIGS. 5A through 5D each illustrate a 1-D directional pattern for Edge Offset sample classification.

[0016] FIG. 6 is a conceptual diagram illustrating a current block that includes a current sample and neighboring samples used in the bilateral filtering process of the current sample.

[0017] FIG. 7 is a conceptual diagram illustrating how neighboring samples within a current TU (for example, a 4 x 4 TU) can be used to filter a current sample.

[0018] FIG. 8 is a conceptual diagram illustrating an example of how samples can be categorized, according to one or more techniques of the present disclosure.

[0019] FIG. 9 is a block diagram illustrating an illustrative video decoder that can implement one or more techniques described in this disclosure.

[0020] FIG. 10 is a flow chart illustrating an illustrative process for filtering a reconstructed block of video data, according to one or more techniques of this disclosure.

DETAILED DESCRIPTION

[0021] Video coders (for example, video encoders and video decoders) can perform various filtering operations on video data. For example, to preserve edges and reduce noise, a video decoder can perform bilateral filtering on a sample of video data by replacing the sample with a weighted average of itself and its neighbors.

[0022] It may generally be desirable for a video coder to be able to process multiple blocks of video data in parallel. For example, a video decoder can reconstruct and filter samples from multiple blocks of video data at the same time. By processing multiple blocks of video data in parallel, a video coder can reduce the amount of time required to decode images of video data. However, in some cases, it may not be possible to process some blocks of video data in parallel. For example, if the decoding and/or reconstruction of samples of a current block depends on filtered samples of a neighboring block, throughput can decrease, since the decoding and/or reconstruction of samples of the current block must wait until the filtering process of the neighboring block is finished.

[0023] According to one or more techniques of this disclosure, a video coder can selectively filter samples of a current block so that the filtering does not prevent parallel processing of neighboring blocks. For example, a video decoder can bilaterally filter samples of a current block that will not be used by neighboring blocks for intra prediction and refrain from bilaterally filtering samples of a current block that may be used by neighboring blocks for intra prediction. In this way, a video coder can still obtain some of the benefits of filtering while remaining able to process neighboring blocks in parallel.
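To make the idea in [0021]-[0023] concrete, the following is a minimal sketch of selective bilateral filtering, not the claimed implementation: the 3x3 neighborhood, the Gaussian weight function, and the assumption that the rightmost column and bottom row are the samples a neighboring block may read as intra-prediction references are choices made for this illustration only.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// A minimal sketch of selective bilateral filtering for one reconstructed
// block. Samples in the rightmost column and bottom row are left unfiltered
// here (assumption: those are the samples a neighboring block may use as
// intra-prediction references), so neighbors need not wait for the filter.
std::vector<int16_t> SelectiveBilateralFilter(
    const std::vector<int16_t>& rec, int width, int height,
    double sigmaSpatial, double sigmaRange) {
  std::vector<int16_t> out(rec);  // unfiltered samples are kept as-is
  for (int y = 0; y < height - 1; ++y) {   // skip bottom row
    for (int x = 0; x < width - 1; ++x) {  // skip rightmost column
      double sum = 0.0, wSum = 0.0;
      const int16_t center = rec[y * width + x];
      // 3x3 neighborhood, clamped at the block boundaries.
      for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
          const int ny = y + dy, nx = x + dx;
          if (ny < 0 || ny >= height || nx < 0 || nx >= width) continue;
          const int16_t s = rec[ny * width + nx];
          // Classic bilateral weight: spatial distance and sample (range)
          // difference, each with a Gaussian falloff.
          const double w =
              std::exp(-(dx * dx + dy * dy) /
                       (2.0 * sigmaSpatial * sigmaSpatial)) *
              std::exp(-double(s - center) * (s - center) /
                       (2.0 * sigmaRange * sigmaRange));
          sum += w * s;
          wSum += w;
        }
      }
      out[y * width + x] = static_cast<int16_t>(sum / wSum + 0.5);
    }
  }
  return out;
}
```

Because the right column and bottom row of the output equal the reconstruction, a neighboring block can begin its own intra prediction from those samples without waiting for the filter to finish.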
[0024] FIG. 1 is a block diagram illustrating an illustrative video encoding and decoding system 10 that can use techniques of this disclosure. As shown in FIG. 1, the system 10 includes a source device 12 that provides encoded video data to be decoded later by a destination device 14. In particular, the source device 12 provides the video data to the destination device 14 via a computer-readable medium 16. Source device 12 and destination device 14 can comprise any of a variety of devices, including desktop computers, notebook (i.e., laptop) computers, tablet computers, set-top boxes, telephone handsets such as smartphones, televisions, cameras, display devices, digital media players, game consoles, video streaming devices, among others. In some cases, the source device 12 and the destination device 14 can be equipped for wireless communication. Thus, the source device 12 and the destination device 14 can be wireless communication devices. The source device 12 is an illustrative video encoding device (i.e., a device for encoding video data). The destination device 14 is an illustrative video decoding device (i.e., a device for decoding video data).

[0025] In the example of FIG. 1, the source device 12 includes a video source 18, storage media 19 configured to store video data, a video encoder 20 and an output interface 22. The destination device 14 includes an input interface 26, a storage medium 28 configured to store encoded video data, a video decoder 30 and a display device 32. In other examples, source device 12 and destination device 14 include other components or arrangements. For example, source device 12 can receive video data from an external video source, such as an external camera. Likewise, destination device 14 can interface with an external display device, instead of including an integrated display device.

[0026] The system 10 illustrated in FIG. 1 is merely an example. Techniques for processing video data can be performed by any digital video encoding and/or decoding device. While the techniques of this disclosure are generally performed by a video encoding device, the techniques can also be performed by a video encoder/decoder, typically referred to as a CODEC. The source device 12 and the destination device 14 are merely examples of such coding devices, in which the source device 12 generates encoded video data for transmission to the destination device 14. In some examples, the source device 12 and the destination device 14 may operate in a substantially symmetrical manner, so that each of the source device 12 and the destination device 14 includes video encoding and decoding components. Thus, system 10 can support unidirectional or bidirectional video transmission between source device 12 and destination device 14, for example, for video streaming, video playback, video broadcasting or video telephony.

[0027] The video source 18 of the source device 12 may include a video capture device, such as a video camera, a video archive containing previously captured video and/or a video feed interface for receiving video data from a video content provider. As an additional alternative, video source 18 can generate computer-graphics-based data as the source video, or a combination of live video, archived video and computer-generated video. Source device 12 may comprise one or more data storage media (for example, storage media 19) configured to store the video data. The techniques described in this disclosure may be applicable to video encoding in general and can be applied to wireless and/or wired applications. In each case, the captured, pre-captured or computer-generated video can be encoded by the video encoder 20. The output interface 22 can output the encoded video information to a computer-readable medium 16.

[0028] The output interface 22 can comprise several types of components or devices. For example, output interface 22 may comprise a wireless transmitter, a modem, a wired network component (for example, an Ethernet card) or other physical component.
In examples where output interface 22 comprises a wireless transmitter, output interface 22 can be configured to transmit data, such as the bit stream, modulated according to a cellular communication standard, such as 4G, 4G-LTE, LTE Advanced, 5G, among others. In some examples where output interface 22 comprises a wireless transmitter, output interface 22 can be configured to transmit data, such as the bit stream, modulated according to other wireless standards, such as an IEEE 802.11 specification, an IEEE 802.15 specification (for example, ZigBee™), a Bluetooth™ standard, among others. In some examples, the circuitry of the output interface 22 may be integrated into the circuitry of the video encoder 20 and/or other components of the source device 12. For example, the video encoder 20 and the output interface 22 may be parts of a system on a chip (SoC). The SoC may also include other components, such as a general purpose microprocessor, a graphics processing unit, and so on.

[0029] The destination device 14 can receive the encoded video data to be decoded via the computer-readable medium 16. The computer-readable medium 16 can comprise any type of medium or device capable of moving the encoded video data from the source device 12 to the destination device 14. In some examples, the computer-readable medium 16 comprises a communication medium to enable source device 12 to transmit encoded video data directly to destination device 14 in real time. The encoded video data can be modulated according to a communication standard, such as a wireless communication protocol, and transmitted to the destination device 14. The communication medium can comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines. The communication medium can be part of a packet-based network, such as a local area network, a wide area network, or a global network, such as the Internet. The communication medium may include routers, switches, base stations or any other equipment that may be useful to facilitate communication from the source device 12 to the destination device 14. The destination device 14 may comprise one or more data storage media configured to store encoded video data and decoded video data.

[0030] In some examples, encoded data can be output from the output interface 22 to a storage device. Likewise, encoded data can be accessed from the storage device via the input interface. The storage device can include any of a variety of distributed or locally accessed data storage media, such as a hard drive, Blu-ray discs, DVDs, CD-ROMs, flash memory, volatile or non-volatile memory, or any other suitable digital storage media for storing encoded video data. In an additional example, the storage device can correspond to a file server or another intermediate storage device that can store the encoded video generated by the source device 12. The destination device 14 can access the stored video data from the storage device via streaming or download. The file server can be any type of server capable of storing encoded video data and transmitting that encoded video data to the destination device 14. Illustrative file servers include a web server (for example, for a website), an FTP server, network-attached storage (NAS) devices, or a local disk drive.
The destination device 14 can access encoded video data over any standard data connection, including an Internet connection. This can include a wireless channel (for example, a Wi-Fi connection), a wired connection (for example, DSL, cable modem, etc.) or a combination of both that is suitable for accessing encoded video data stored on a file server. The transmission of encoded video data from the storage device can be a streaming transmission, a download transmission or a combination thereof.

[0031] The techniques can be applied to video encoding in support of any of a variety of multimedia applications, such as over-the-air television broadcasts, cable television transmissions, satellite television transmissions, Internet streaming video transmissions, such as dynamic adaptive streaming over HTTP (DASH), digital video that is encoded on a data storage medium, decoding of digital video stored on a data storage medium, or other applications. In some examples, system 10 can be configured to support unidirectional or bidirectional video transmission to support applications such as video streaming, video playback, video broadcasting and/or video telephony.

[0032] Computer-readable medium 16 may include transient media, such as a wireless broadcast or wired network transmission, or storage media (i.e., non-transitory storage media), such as a hard disk, flash drive, compact disc, digital video disc, Blu-ray disc or other computer-readable media. In some examples, a network server (not shown) can receive encoded video data from source device 12 and provide the encoded video data to the destination device 14, for example, via network transmission. Similarly, a computing device of a media production facility, such as a disc stamping facility, can receive encoded video data from source device 12 and produce a disc containing the encoded video data. Therefore, it can be understood that the computer-readable medium 16 includes one or more computer-readable media of various forms, in various examples.

[0033] The input interface 26 of the destination device 14 receives information from the computer-readable medium 16. The information from the computer-readable medium 16 may include syntax information defined by the video encoder 20, which is also used by the video decoder 30, and which includes syntax elements that describe characteristics and/or processing of blocks and other coded units, for example, groups of pictures (GOPs). Input interface 26 can comprise various types of components or devices. For example, input interface 26 may comprise a wireless receiver, a modem, a wired network component (e.g., an Ethernet card) or other physical component. In examples where the input interface 26 comprises a wireless receiver, the input interface 26 can be configured to receive data, such as the bit stream, modulated according to a cellular communication standard, such as 4G, 4G-LTE, LTE Advanced, 5G, among others. In some examples where input interface 26 comprises a wireless receiver, the input interface 26 can be configured to receive data, such as the bit stream, modulated according to other wireless standards, such as an IEEE 802.11 specification, an IEEE 802.15 specification (for example, ZigBee™), a Bluetooth™ standard, among others.
In some examples, the circuitry of the input interface 26 can be integrated into the circuitry of the video decoder 30 and/or other components of the destination device 14. For example, the video decoder 30 and the input interface 26 can be parts of a SoC. The SoC can also include other components, such as a general purpose microprocessor, a graphics processing unit, and so on.

[0034] The storage medium 28 can be configured to store encoded video data, such as encoded video data (e.g., a bit stream) received by the input interface 26. The display device 32 displays the decoded video data to a user and can comprise any of a variety of display devices, such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display or another type of display device.

[0035] The video encoder 20 and the video decoder 30 can each be implemented as any of a variety of suitable encoder circuitry, such as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic, software, hardware, firmware or any combination thereof. When the techniques are partially implemented in software, a device can store instructions for the software in a suitable non-transitory computer-readable medium and execute the instructions in hardware using one or more processors to perform the techniques of this disclosure. Each of video encoder 20 and video decoder 30 can be included in one or more encoders or decoders, either of which can be integrated as part of a combined encoder/decoder (CODEC) in a respective device.

[0036] In some examples, video encoder 20 and video decoder 30 may operate according to a video encoding standard, such as an existing or future standard. Illustrative video encoding standards include, but are not limited to, ITU-T H.261, ISO/IEC MPEG-1 Visual, ITU-T H.262 or ISO/IEC MPEG-2 Visual, ITU-T H.263, ISO/IEC MPEG-4 Visual and ITU-T H.264 (also known as ISO/IEC MPEG-4 AVC), including its Scalable Video Coding (SVC) and Multi-View Video Coding (MVC) extensions. Additionally, a new video encoding standard, called High Efficiency Video Coding (HEVC) or ITU-T H.265, including its range and screen content coding extensions, 3D video coding (3D-HEVC), multiview extension (MV-HEVC) and scalable extension (SHVC), was developed by the Joint Collaboration Team on Video Coding (JCT-VC), as well as by the Joint Collaboration Team on 3D Video Coding Extension Development (JCT-3V) of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). Ye-Kui Wang et al., High Efficiency Video Coding (HEVC) Defect Report, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting, Vienna, AT, 25 July - 2 Aug. 2013, document JCTVC-N1003_v1, is a preliminary HEVC specification.

[0037] ITU-T VCEG (Q6/16) and ISO/IEC MPEG (JTC 1/SC 29/WG 11) are now studying the potential need for standardization of future video encoding technology with a compression capability that significantly exceeds that of the current HEVC standard (including its current extensions and near-term extensions for screen content coding and high dynamic range coding).
The groups are working together on this exploration activity in a joint collaborative effort known as the Joint Video Exploration Team (JVET) to evaluate compression technology designs proposed by their experts in this area. The JVET first met during October 19-21, 2015. Jianle Chen et al., Algorithm Description of Joint Exploration Test Model 3, Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 3rd Meeting, Geneva, CH, 26 May - 1 June 2016, document JVET-C1001, is an algorithmic description of the Joint Exploration Test Model 3 (JEM3).

[0038] In HEVC and other video encoding specifications, video data includes a series of images. Images can also be referred to as frames. An image can include one or more sample arrays. Each respective sample array of an image can comprise a sample array for a respective color component. In HEVC, an image can include three sample arrays, denoted SL, SCb and SCr. SL is a two-dimensional array (i.e., a block) of luma samples. SCb is a two-dimensional array of Cb chroma samples. SCr is a two-dimensional array of Cr chroma samples. In other cases, an image may be monochrome and may include only an array of luma samples.

[0039] As part of encoding video data, video encoder 20 can encode images of the video data. In other words, the video encoder 20 can generate encoded representations of the images of the video data. An encoded representation of an image can be referred to in this document as a coded image or an encoded image.

[0040] To generate an encoded representation of an image, video encoder 20 can encode blocks of the image. The video encoder 20 may include, in a bit stream, an encoded representation of the video block. For example, to generate an encoded representation of an image, the video encoder 20 can divide each sample array of the image into coding tree blocks (CTBs) and encode the CTBs. A CTB can be an N x N block of samples in a sample array of an image. In the HEVC main profile, the size of a CTB can vary from 16x16 to 64x64, although technically 8x8 CTB sizes can be supported.

[0041] A coding tree unit (CTU) of an image may comprise one or more CTBs and may comprise syntax structures used to encode the samples of the one or more CTBs. For example, each CTU may comprise a CTB of luma samples, two corresponding CTBs of chroma samples and syntax structures used to encode the samples of the CTBs. In monochrome images or images having three separate color planes, a CTU can comprise a single CTB and syntax structures used to encode the samples of the CTB. A CTU can also be referred to as a tree block or a largest coding unit (LCU). In this disclosure, a syntax structure can be defined as zero or more syntax elements present together in a bit stream in a specified order. In some codecs, an encoded image is an encoded representation containing all CTUs of the image.
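As a small illustration of the partitioning arithmetic in [0040]-[0041], the following sketch counts the CTUs that tile an image for a given CTB size; the function name and the sample dimensions are hypothetical, and the rounding-up reflects that the last CTB in a row or column may extend past the image edge.

```cpp
#include <cstdio>

// Number of CTUs needed to tile an image: divisions round up, since the
// last CTB in a row or column may only partially overlap the image.
int CtusPerImage(int widthInSamples, int heightInSamples, int ctbSize) {
  const int ctusAcross = (widthInSamples + ctbSize - 1) / ctbSize;
  const int ctusDown = (heightInSamples + ctbSize - 1) / ctbSize;
  return ctusAcross * ctusDown;
}

int main() {
  // A 1920x1080 image with the common 64x64 CTB size: 30 x 17 = 510 CTUs.
  std::printf("%d\n", CtusPerImage(1920, 1080, 64));
  return 0;
}
```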
[0042] To encode a CTU of an image, the video encoder 20 can divide the CTBs of the CTU into one or more coding blocks. A coding block is an N x N block of samples. In some codecs, to encode a CTU of an image, video encoder 20 can recursively perform quadtree partitioning on the coding tree blocks of a CTU to divide the CTBs into coding blocks, hence the name coding tree units. A coding unit (CU) can comprise one or more coding blocks and syntax structures used to code the samples of the one or more coding blocks. For example, a CU can include a coding block of luma samples and two corresponding coding blocks of chroma samples of an image that has a luma sample array, a Cb sample array and a Cr sample array, and the syntax structures used to code the samples of the coding blocks. In monochrome images or images having three separate color planes, a CU can comprise a single coding block and syntax structures used to code the samples of the coding block.

[0043] Additionally, the video encoder 20 can encode the CUs of an image of the video data. In some codecs, as part of encoding a CU, video encoder 20 may divide a coding block of the CU into one or more prediction blocks. A prediction block is a rectangular (that is, square or non-square) block of samples to which the same prediction is applied. A prediction unit (PU) of a CU may comprise one or more prediction blocks of the CU and syntax structures used to predict the one or more prediction blocks. For example, a PU may comprise a prediction block of luma samples, two corresponding prediction blocks of chroma samples and syntax structures used to predict the prediction blocks. In monochrome images or images having three separate color planes, a PU can include a single prediction block and syntax structures used to predict the prediction block.

[0044] The video encoder 20 can generate a predictive block (for example, a luma, Cb and Cr predictive block) for a prediction block (for example, a luma, Cb and Cr prediction block) of a CU. Video encoder 20 can use intra prediction or inter prediction to generate a predictive block. If the video encoder 20 uses intra prediction to generate a predictive block, the video encoder 20 can generate the predictive block based on decoded samples of the image that includes the CU. If video encoder 20 uses inter prediction to generate a predictive block of a CU of a current image, video encoder 20 can generate the predictive block of the CU based on decoded samples of a reference image (i.e., an image other than the current image).

[0045] The video encoder 20 can generate one or more residual blocks for the CU. For example, video encoder 20 can generate a luma residual block for the CU. Each sample in the CU's luma residual block indicates a difference between a luma sample in one of the CU's predictive luma blocks and a corresponding sample in the CU's original luma coding block. In addition, the video encoder 20 can generate a Cb residual block for the CU. Each sample in the Cb residual block of a CU can indicate a difference between a Cb sample in one of the CU's predictive Cb blocks and a corresponding sample in the CU's original Cb coding block. The video encoder 20 can also generate a Cr residual block for the CU. Each sample in the CU's Cr residual block can indicate a difference between a Cr sample in one of the CU's predictive Cr blocks and a corresponding sample in the CU's original Cr coding block.
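The residual computation in [0045] is a plain sample-by-sample subtraction. A minimal sketch, with hypothetical names and a flat-array layout assumed for illustration:

```cpp
#include <cstdint>
#include <vector>

// Residual block = original coding block minus predictive block, computed
// sample by sample ([0045]). The 16-bit output leaves headroom for the
// negative differences of 8- or 10-bit samples.
std::vector<int16_t> ComputeResidual(const std::vector<uint8_t>& original,
                                     const std::vector<uint8_t>& predictive) {
  std::vector<int16_t> residual(original.size());
  for (size_t i = 0; i < original.size(); ++i) {
    residual[i] = int16_t(original[i]) - int16_t(predictive[i]);
  }
  return residual;
}
```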
[0046] Additionally, the video encoder 20 can decompose the residual blocks of a CU into one or more transform blocks. For example, video encoder 20 can use quadtree partitioning to decompose the residual blocks of a CU into one or more transform blocks. A transform block is a rectangular (for example, square or non-square) block of samples to which the same transform is applied. A transform unit (TU) of a CU can comprise one or more transform blocks. For example, a TU may comprise a transform block of luma samples, two corresponding transform blocks of chroma samples, and syntax structures used to transform the samples of the transform blocks. Thus, each TU of a CU can have a luma transform block, a Cb transform block and a Cr transform block. The luma transform block of the TU can be a sub-block of the CU's luma residual block. The Cb transform block can be a sub-block of the CU's Cb residual block. The Cr transform block can be a sub-block of the CU's Cr residual block. In monochrome images or images with three separate color planes, a TU can comprise a single transform block and syntax structures used to transform the samples of the transform block.

[0047] The video encoder 20 can apply one or more transforms to a transform block of a TU to generate a block of coefficients for the TU. A block of coefficients can be a two-dimensional array of transform coefficients. A transform coefficient can be a scalar quantity. In some examples, the one or more transforms convert the transform block from a pixel domain to a frequency domain. Thus, in such examples, a transform coefficient can be a scalar quantity considered to be in a frequency domain. A transform coefficient level is an integer quantity representing a value associated with a specific two-dimensional frequency index in a decoding process before scaling to calculate a transform coefficient value.

[0048] In some examples, the video encoder 20 skips the application of the transforms to the transform block. In such examples, the video encoder 20 can handle the residual sample values in the same way as the transform coefficients. Thus, in examples where the video encoder 20 skips the application of the transforms, the following discussion of transform coefficients and coefficient blocks may be applicable to transform blocks of residual samples.

[0049] After generating a block of coefficients, the video encoder 20 can quantize the block of coefficients. Quantization generally refers to a process in which the transform coefficients are quantized to possibly reduce the amount of data used to represent the transform coefficients, providing additional compression. In some examples, video encoder 20 skips the quantization. After the video encoder 20 quantizes a block of coefficients, the video encoder 20 can generate syntax elements indicating the quantized transform coefficients. The video encoder 20 can entropy encode one or more of the syntax elements indicating the quantized transform coefficients. For example, video encoder 20 can perform Context Adaptive Binary Arithmetic Coding (CABAC) on the syntax elements indicating the quantized transform coefficients. Thus, an encoded block (e.g., an encoded CU) can include the entropy-encoded syntax elements that indicate the quantized transform coefficients.
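As a rough illustration of the quantization in [0049]: in HEVC the quantization step size approximately doubles for every increase of 6 in the quantization parameter (QP). The sketch below uses that relationship directly in floating point for readability; real codecs use integer scaling tables, and the names here are hypothetical.

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Approximate HEVC-style scalar quantization: the step size doubles for
// every +6 in QP (Qstep ~= 2^((QP - 4) / 6)).
std::vector<int32_t> QuantizeCoefficients(const std::vector<int32_t>& coeffs,
                                          int qp) {
  const double qstep = std::pow(2.0, (qp - 4) / 6.0);
  std::vector<int32_t> levels(coeffs.size());
  for (size_t i = 0; i < coeffs.size(); ++i) {
    // Round to nearest, with the sign handled symmetrically.
    const double scaled = coeffs[i] / qstep;
    levels[i] = int32_t(scaled >= 0 ? scaled + 0.5 : scaled - 0.5);
  }
  return levels;
}

// Inverse quantization, as a decoder would apply it ([0053], [0083]):
// reconstruct approximate coefficient values by rescaling the levels.
std::vector<int32_t> DequantizeLevels(const std::vector<int32_t>& levels,
                                      int qp) {
  const double qstep = std::pow(2.0, (qp - 4) / 6.0);
  std::vector<int32_t> coeffs(levels.size());
  for (size_t i = 0; i < levels.size(); ++i) {
    coeffs[i] = int32_t(std::lround(levels[i] * qstep));
  }
  return coeffs;
}
```

The rounding in the forward pass is where information is lost, which is why the quantized coefficients may have lower precision than the originals.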
[0050] Video encoder 20 can produce a bit stream that includes encoded video data. In other words, the video encoder 20 can output a bit stream that includes an encoded representation of video data. For example, the bit stream may comprise a sequence of bits that form a representation of encoded images of the video data and associated data. In some examples, a representation of an encoded image may include encoded representations of blocks.

[0051] The bit stream may comprise a sequence of network abstraction layer (NAL) units. A NAL unit is a syntax structure containing an indication of the type of data in the NAL unit and the bytes containing that data in the form of a raw byte sequence payload (RBSP) interspersed as necessary with emulation prevention bits. Each of the NAL units can include a NAL unit header and encapsulate an RBSP. The NAL unit header may include a syntax element indicating a NAL unit type code. The NAL unit type code specified by the NAL unit header of a NAL unit indicates the type of the NAL unit. An RBSP can be a syntax structure containing an integer number of bytes that is encapsulated within a NAL unit. In some cases, an RBSP includes zero bits.
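The emulation prevention bits mentioned in [0051] keep RBSP bytes from imitating a start code. A minimal sketch of the standard escaping rule (insert 0x03 whenever two zero bytes would be followed by a byte of 0x03 or less); the function name is hypothetical:

```cpp
#include <cstdint>
#include <vector>

// Escape an RBSP for carriage in a NAL unit: after two consecutive 0x00
// bytes, any byte <= 0x03 is preceded by an emulation prevention byte
// 0x03, so the byte sequences 0x000000..0x000003 never appear in the
// payload and cannot be mistaken for a start code.
std::vector<uint8_t> AddEmulationPrevention(const std::vector<uint8_t>& rbsp) {
  std::vector<uint8_t> out;
  int zeros = 0;  // run of consecutive zero bytes already emitted
  for (uint8_t b : rbsp) {
    if (zeros >= 2 && b <= 0x03) {
      out.push_back(0x03);  // emulation prevention byte
      zeros = 0;
    }
    out.push_back(b);
    zeros = (b == 0x00) ? zeros + 1 : 0;
  }
  return out;
}
```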
[0052] NAL units can encapsulate RBSPs for video parameter sets (VPSs), sequence parameter sets (SPSs) and picture parameter sets (PPSs). A VPS is a syntax structure that comprises syntax elements that apply to zero or more entire coded video sequences (CVSs). An SPS is also a syntax structure that comprises syntax elements that apply to zero or more entire CVSs. An SPS can include a syntax element that identifies a VPS that is active when the SPS is active. Thus, the syntax elements of a VPS may be more generally applicable than the syntax elements of an SPS. A PPS is a syntax structure comprising syntax elements that apply to zero or more encoded images. A PPS can include a syntax element that identifies an SPS that is active when the PPS is active. A slice header of a slice can include a syntax element that indicates a PPS that is active when the slice is being encoded.

[0053] The video decoder 30 can receive a bit stream generated by the video encoder 20. As noted above, the bit stream may comprise an encoded representation of video data. The video decoder 30 can decode the bit stream to reconstruct the images of the video data. As part of decoding the bit stream, the video decoder 30 can parse the bit stream to obtain syntax elements from the bit stream. The video decoder 30 can reconstruct images of the video data based, at least in part, on the syntax elements obtained from the bit stream. The process for reconstructing images of the video data can generally be reciprocal to the process performed by the video encoder 20 to encode the images. For example, the video decoder 30 can use inter or intra prediction to generate one or more predictive blocks for each PU of the current CU, and can use motion vectors of the PUs to determine predictive blocks for the PUs of a current CU. Additionally, the video decoder 30 can inverse quantize coefficient blocks of the TUs of the current CU. The video decoder 30 can perform inverse transforms on the coefficient blocks to reconstruct transform blocks of the TUs of the current CU. In some examples, the video decoder 30 can reconstruct the coding blocks of the current CU by adding the samples of the predictive blocks for the PUs of the current CU to the corresponding decoded samples of the transform blocks of the TUs of the current CU. By reconstructing the coding blocks for each CU of an image, the video decoder 30 can reconstruct the image.

[0054] A slice of an image can include an integer number of CTUs of the image. The CTUs of a slice can be ordered consecutively in a scan order, such as a raster scan order. In HEVC, a slice is defined as an integer number of CTUs contained in one independent slice segment and in all subsequent dependent slice segments (if any) that precede the next independent slice segment (if any) within the same access unit. Additionally, in HEVC, a slice segment is defined as an integer number of coding tree units ordered consecutively in the tile scan and contained in a single NAL unit. A tile scan is a specific sequential ordering of the CTBs that partition an image, in which the CTBs are ordered consecutively in a CTB raster scan within a tile, while the tiles of an image are ordered consecutively in a raster scan of the tiles of the image. As defined in HEVC and potentially other codecs, a tile is a rectangular region of CTBs within a specific tile column and a specific tile row in an image. Other tile definitions may apply to block types other than CTBs.

[0055] Video encoder 20 and/or video decoder 30 can perform various filtering operations on video data. For example, as discussed in more detail below, video decoder 30 can perform bilateral filtering on a sample of video data, replacing the sample with a weighted average of itself and its neighbors. However, performing bilateral filtering on samples of a current block can reduce the throughput of the video decoder 30, since the reconstruction of samples in blocks neighboring the current block can depend on the filtered samples of the current block.

[0056] According to one or more techniques of this disclosure, video encoder 20 and video decoder 30 may selectively filter samples of a current block so that the filtering does not prevent parallel processing of neighboring blocks. For example, video decoder 30 can bilaterally filter samples of a current block that will not be used by neighboring blocks for intra prediction and refrain from bilaterally filtering samples of a current block that may be used by neighboring blocks for intra prediction. In this way, video encoder 20 and video decoder 30 can still obtain some of the benefits of filtering while remaining able to process neighboring blocks in parallel.

[0057] FIG. 2 is a block diagram illustrating an illustrative video encoder 200 that can perform the techniques of this disclosure. Video encoder 200 represents an example of video encoder 20 of FIG. 1, although other examples are possible. FIG. 2 is provided for explanatory purposes and should not be considered as limiting the techniques as widely exemplified and described in this disclosure. For purposes of explanation, this disclosure describes video encoder 200 in the context of video encoding standards such as the HEVC video encoding standard and the H.266 video encoding standard in development. However, the techniques of this disclosure are not limited to these video encoding standards and are generally applicable to video encoding and decoding.

[0058] In the example of FIG. 2, video encoder 200 includes video data memory 230, mode selection unit 202, residual generation unit 204, transform processing unit 206, quantization unit 208, inverse quantization unit 210, inverse transform processing unit 212, reconstruction unit 214, filter unit 216, decoded picture buffer (DPB) 218 and entropy coding unit 220.
[0059] Video data memory 230 can store video data to be encoded by the components of video encoder 200. Video encoder 200 can receive the video data stored in video data memory 230 from, for example, video source 18 (FIG. 1). DPB 218 can act as a reference picture memory that stores reference video data for use in the prediction of subsequent video data by video encoder 200. Video data memory 230 and DPB 218 can be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magnetoresistive RAM (MRAM), resistive RAM (RRAM) or other types of memory devices. Video data memory 230 and DPB 218 can be provided by the same memory device or by separate memory devices. In various examples, the video data memory 230 may be on-chip with other components of the video encoder 200, as illustrated, or off-chip with respect to those components.

[0060] In this description, the reference to video data memory 230 should not be interpreted as limited to memory internal to video encoder 200, unless specifically described as such, nor to memory external to video encoder 200, unless specifically described as such. Instead, the reference to video data memory 230 should be understood as reference memory that stores video data that video encoder 200 receives for encoding (for example, video data for a current block that is to be encoded). The video data memory 230 can also provide temporary storage of the outputs of the various units of the video encoder 200.

[0061] The various units of FIG. 2 are illustrated to assist in understanding the operations performed by the video encoder 200. The units can be implemented as fixed-function circuits, programmable circuits or a combination thereof. Fixed-function circuits refer to circuits that provide specific functionality and are preset in the operations that can be performed. Programmable circuits refer to circuits that can be programmed to perform various tasks and provide flexible functionality in the operations that can be performed. For example, programmable circuits can run software or firmware that causes the programmable circuits to operate in the manner defined by the instructions of the software or firmware. Fixed-function circuits can execute software instructions (for example, to receive parameters or to output parameters), but the types of operations that fixed-function circuits perform are generally immutable. In some examples, one or more of the units can be distinct circuit blocks (fixed-function or programmable), and in some examples, the one or more units can be integrated circuits.

[0062] Video encoder 200 may include arithmetic logic units (ALUs), elementary function units (EFUs), digital circuits, analog circuits and/or programmable cores, formed from programmable circuits. In examples where the operations of video encoder 200 are performed by software executed by the programmable circuits, video data memory 230 can store the object code of the software that video encoder 200 receives and executes, or another memory (not shown) can store such instructions.

[0063] The video data memory 230 is configured to store received video data. The video encoder 200 can retrieve an image of the video data from the video data memory 230 and provide the video data to the residual generation unit 204 and to the mode selection unit 202. The video data in the video data memory 230 can be raw video data that is to be encoded.
[0064] The mode selection unit 202 includes a motion estimation unit 222, a motion compensation unit 224, and an intra prediction unit 226. The mode selection unit 202 may include additional functional units to perform video prediction according to other prediction modes. As examples, mode selection unit 202 may include a palette unit, an intra-block copy unit (which may be part of motion estimation unit 222 and/or motion compensation unit 224), an affine unit, a linear model (LM) unit, among others.

[0065] The mode selection unit 202 generally coordinates multiple encoding passes to test combinations of encoding parameters and the resulting rate-distortion values for such combinations. The encoding parameters can include the division of CTUs into CUs, prediction modes for the CUs, transform types for the residual data of the CUs, quantization parameters for the residual data of the CUs, and so on. The mode selection unit 202 can ultimately select the combination of encoding parameters that has rate-distortion values that are better than those of the other tested combinations.

[0066] Video encoder 200 can divide an image retrieved from video data memory 230 into a series of CTUs and encapsulate one or more CTUs within a slice. The mode selection unit 202 can partition a CTU of the image according to a tree structure, such as the QTBT structure or the HEVC quadtree structure described above. As described above, video encoder 200 can form one or more CUs from the partitioning of a CTU according to the tree structure. Such a CU can also be referred to generally as a video block or block.

[0067] In general, the mode selection unit 202 also controls its components (for example, motion estimation unit 222, motion compensation unit 224 and intra prediction unit 226) to generate a prediction block for a current block (for example, a current CU, or in HEVC, the overlapping portion of a PU and a TU). For inter prediction of a current block, motion estimation unit 222 can perform a motion search to identify one or more closely matching reference blocks in one or more reference images (for example, one or more previously encoded images stored in DPB 218). In particular, the motion estimation unit 222 can calculate a value representative of how similar a potential reference block is to the current block, for example, according to the sum of absolute differences (SAD), the sum of squared differences (SSD), the mean absolute difference (MAD), the mean squared differences (MSD), among others. The motion estimation unit 222 can generally perform these calculations using the sample-by-sample differences between the current block and the reference block being considered. The motion estimation unit 222 can identify the reference block having the lowest value resulting from these calculations, indicating the reference block that most closely matches the current block.

[0068] The motion estimation unit 222 can form one or more motion vectors (MVs) that define the positions of the reference blocks in the reference images relative to the position of the current block in a current image. The motion estimation unit 222 can then provide the motion vectors to motion compensation unit 224. For example, for unidirectional inter prediction, the motion estimation unit 222 can provide a single motion vector, while for bidirectional inter prediction, the motion estimation unit 222 can provide two motion vectors.
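As a small illustration of the matching metric in [0067], a sketch of the sum of absolute differences (SAD) between a current block and a candidate reference block; the names and the stride-based layout are assumptions of this example:

```cpp
#include <cstdint>
#include <cstdlib>

// Sum of absolute differences between a current block and a candidate
// reference block at offset (refX, refY) in the reference image. A lower
// SAD means a closer match ([0067]).
uint32_t BlockSad(const uint8_t* cur, int curStride,
                  const uint8_t* ref, int refStride,
                  int refX, int refY, int width, int height) {
  uint32_t sad = 0;
  for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
      sad += std::abs(int(cur[y * curStride + x]) -
                      int(ref[(refY + y) * refStride + (refX + x)]));
    }
  }
  return sad;
}
```

A motion search would evaluate this over a window of candidate (refX, refY) positions and keep the lowest-cost one as the motion vector.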
The motion compensation unit 224 can then generate a prediction block using the motion vectors. For example, the motion compensation unit 224 can retrieve data of the reference block using the motion vector. As another example, if the motion vector has fractional sample precision, the motion compensation unit 224 can interpolate values for the prediction block according to one or more interpolation filters. In addition, for bidirectional inter prediction, the motion compensation unit 224 can retrieve data for two reference blocks identified by the respective motion vectors and combine the retrieved data, for example, through a sample-by-sample average or a weighted average.

[0069] As another example, for intra prediction, or intra prediction coding, the intra prediction unit 226 can generate the prediction block from samples neighboring the current block. For example, for directional modes, the intra prediction unit 226 can generally mathematically combine values of neighboring samples and fill in these calculated values in the direction defined across the current block to produce the prediction block. As another example, for DC mode, the intra prediction unit 226 can average the samples neighboring the current block and generate the prediction block to include this resulting average for each sample of the prediction block.

[0070] FIG. 3 is a conceptual diagram illustrating a typical example of intra prediction for a 16x16 image block. As shown in FIG. 3, with intra prediction, the 16x16 image block (in the heavy dashed square) can be predicted from the neighboring reconstructed samples above and to the left (reference samples) along a selected prediction direction (as indicated by the arrow).

[0071] In HEVC, the intra prediction of a luma block includes 35 modes, including Planar mode, DC mode and 33 angular modes. FIGS. 4A and 4B are conceptual diagrams illustrating examples of intra prediction modes. In HEVC, after the intra prediction block has been generated for the VER (vertical) and HOR (horizontal) intra modes, the leftmost column and the topmost row of the prediction samples can be additionally adjusted, respectively.

[0072] To capture the finer edge directions present in natural videos, the directional intra modes are extended from 33, as defined in HEVC, to 65. The new directional modes are represented as dashed arrows in FIG. 4B, and the Planar and DC modes remain the same. These denser directional intra prediction modes apply to all block sizes and to both luma and chroma intra predictions.

[0073] Additionally, four-tap intra interpolation filters can be used instead of two-tap intra interpolation filters to generate the intra prediction block, which improves the directional intra prediction accuracy. The boundary filter in HEVC can be further extended to several diagonal intra modes, and boundary samples of up to four columns or rows are additionally adjusted using a two-tap filter (for intra modes 2 and 34) or a three-tap filter (for intra modes 3-6 and 30-33).
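A minimal sketch of the DC intra mode described in [0069], which fills the prediction block with the average of the reconstructed neighboring samples; the names and the assumption that all above and left neighbors are available are for illustration only:

```cpp
#include <cstdint>
#include <vector>

// DC intra prediction: average the reconstructed reference samples above
// and to the left of the block and fill the prediction block with that
// value ([0069], FIG. 3).
std::vector<uint8_t> PredictIntraDc(const std::vector<uint8_t>& above,
                                    const std::vector<uint8_t>& left,
                                    int size) {
  uint32_t sum = 0;
  for (int i = 0; i < size; ++i) sum += above[i];
  for (int i = 0; i < size; ++i) sum += left[i];
  const uint8_t dc = uint8_t((sum + size) / (2 * size));  // rounded average
  return std::vector<uint8_t>(size_t(size) * size, dc);   // flat NxN block
}
```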
[0074] The position dependent intra prediction combination (PDPC) is a post-processing for intra prediction that invokes a combination of HEVC intra prediction with unfiltered boundary reference samples. In adaptive reference sample smoothing (ARSS), two low-pass filters (LPFs) are used to process the reference samples:
• a 3-tap LPF with coefficients [1, 2, 1]/4
• a 5-tap LPF with coefficients [2, 3, 6, 3, 2]/16

[0075] CCLM (cross-component linear model) is a new chroma prediction method, in which the reconstructed luma blocks and the adjacent chroma block are used to derive the chroma prediction block. Additional information on PDPC, ARSS and CCLM can be found in JVET-D1001, 4th Meeting: Chengdu, CN, October 15-21, 2016 (hereafter, JVET-D1001).
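A minimal sketch of applying the two reference-sample smoothing filters listed above; when each filter is selected is codec-dependent and omitted here, so this only shows the [1, 2, 1]/4 and [2, 3, 6, 3, 2]/16 convolutions, with samples too close to the ends left unchanged:

```cpp
#include <cstdint>
#include <vector>

// Smooth a line of intra reference samples with one of the two ARSS
// low-pass filters ([0074]). Samples without a full neighborhood at the
// ends of the line are copied through unfiltered.
std::vector<uint8_t> SmoothReferenceSamples(const std::vector<uint8_t>& ref,
                                            bool useStrongFilter) {
  // Taps sum to 4 (weak) or 16 (strong), so dividing is a right shift.
  const std::vector<int> taps =
      useStrongFilter ? std::vector<int>{2, 3, 6, 3, 2}
                      : std::vector<int>{1, 2, 1};
  const int half = int(taps.size()) / 2;
  const int shift = useStrongFilter ? 4 : 2;
  std::vector<uint8_t> out(ref);
  for (int i = half; i + half < int(ref.size()); ++i) {
    int acc = 1 << (shift - 1);  // rounding offset
    for (int t = 0; t < int(taps.size()); ++t) {
      acc += taps[t] * ref[i - half + t];
    }
    out[i] = uint8_t(acc >> shift);
  }
  return out;
}
```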
[0076] The mode selection unit 202 provides the prediction block to the residual generation unit 204. The residual generation unit 204 receives a raw, uncoded version of the current block from the video data memory 230 and the prediction block from the mode selection unit 202. The residual generation unit 204 calculates the sample-by-sample differences between the current block and the prediction block. The resulting sample-by-sample differences define a residual block for the current block. In some examples, the residual generation unit 204 can also determine differences between the sample values in the residual block to generate a residual block using residual differential pulse code modulation (RDPCM). In some examples, the residual generation unit 204 can be formed using one or more subtraction circuits that perform binary subtraction.

[0077] In examples where the mode selection unit 202 divides CUs into PUs, each PU can be associated with a luma prediction unit and corresponding chroma prediction units. The video encoder 200 and the video decoder 300 can support PUs having various sizes. As indicated above, the size of a CU can refer to the size of the CU's luma coding block, and the size of a PU can refer to the size of a luma prediction unit of the PU. Assuming that the size of a given CU is 2N x 2N, video encoder 200 can support PU sizes of 2N x 2N or N x N for intra prediction, and symmetric PU sizes of 2N x 2N, 2N x N, N x 2N, N x N or similar for inter prediction. The video encoder 200 and the video decoder 300 can also support asymmetric partitioning into PU sizes of 2N x nU, 2N x nD, nL x 2N and nR x 2N for inter prediction.

[0078] In examples where the mode selection unit does not further divide a CU into PUs, each CU can be associated with a luma coding block and the corresponding chroma coding blocks. As stated above, the size of a CU can refer to the size of the CU's luma coding block. The video encoder 200 and the video decoder 300 can support CU sizes of 2N x 2N, 2N x N or N x 2N.

[0079] For other video encoding techniques, such as intra-block copy mode encoding, affine mode encoding and linear model (LM) mode encoding, as some examples, the mode selection unit 202, via the units associated with those encoding techniques, generates a prediction block for the current block being encoded. In some examples, such as palette mode encoding, the mode selection unit 202 may not generate a prediction block and, instead, generate syntax elements that indicate the manner in which to reconstruct the block based on a selected palette. In such modes, the mode selection unit 202 can provide these syntax elements to the entropy coding unit 220 to be encoded.

[0080] As described above, the residual generation unit 204 receives the video data for the current block and the corresponding prediction block. The residual generation unit 204 then generates a residual block for the current block. To generate the residual block, the residual generation unit 204 calculates the sample-by-sample differences between the prediction block and the current block.

[0081] Transform processing unit 206 applies one or more transforms to the residual block to generate a block of transform coefficients (referred to in this document as a transform coefficient block). The transform processing unit 206 can apply various transforms to a residual block to form the transform coefficient block. For example, transform processing unit 206 can apply a discrete cosine transform (DCT), a directional transform, a Karhunen-Loeve transform (KLT), or a conceptually similar transform to a residual block. In some examples, transform processing unit 206 may perform multiple transforms on a residual block, for example, a main transform and a secondary transform, such as a rotational transform. In some examples, transform processing unit 206 does not apply transforms to a residual block.

[0082] The quantization unit 208 can quantize the transform coefficients in a transform coefficient block, to produce a block of quantized transform coefficients. The quantization unit 208 can quantize the transform coefficients of a transform coefficient block according to a quantization parameter (QP) value associated with the current block. The video encoder 200 (for example, via the mode selection unit 202) can adjust the degree of quantization applied to the coefficient blocks associated with the current block by adjusting the QP value associated with the CU. Quantization can introduce loss of information and, therefore, the quantized transform coefficients may have lower precision than the original transform coefficients produced by the transform processing unit 206.

[0083] The inverse quantization unit 210 and the inverse transform processing unit 212 can apply inverse quantization and inverse transforms to a quantized transform coefficient block, respectively, to reconstruct a residual block from the transform coefficient block. The reconstruction unit 214 can produce a reconstructed block corresponding to the current block (albeit potentially with some degree of distortion) based on the reconstructed residual block and a prediction block generated by the mode selection unit 202. For example, the reconstruction unit 214 can add samples of the reconstructed residual block to corresponding samples of the prediction block generated by the mode selection unit 202 to produce the reconstructed block.

[0084] Filter unit 216 can perform one or more filter operations on reconstructed blocks. For example, the filter unit 216 can perform deblocking operations to reduce blocking artifacts along the edges of the CUs. As illustrated by dashed lines, the operations of the filter unit 216 can be skipped in some examples.

[0085] Video encoder 200 stores reconstructed blocks in DPB 218. For example, in examples where the operations of filter unit 216 are not required, reconstruction unit 214 can store the reconstructed blocks in DPB 218. In examples where the operations of filter unit 216 are required, filter unit 216 can store the filtered reconstructed blocks in DPB 218.
The motion estimation unit 222 and the motion compensation unit 224 can retrieve from the DPB 218 a reference image, formed from the reconstructed (and potentially filtered) blocks, to inter predict blocks of subsequently encoded images. In addition, the intra prediction unit 226 can use reconstructed blocks in the DPB 218 of a current image to intra predict other blocks in the current image.

[0086] In general, the entropy coding unit 220 can entropy encode syntax elements received from other functional components of the video encoder 200. For example, the entropy coding unit 220 can entropy encode blocks of quantized transform coefficients from the quantization unit 208. As another example, the entropy coding unit 220 can entropy encode prediction syntax elements (for example, motion information for inter prediction or intra-mode information for intra prediction) from the mode selection unit 202. The entropy coding unit 220 can perform one or more entropy coding operations on the syntax elements, which are another example of video data, to generate entropy-encoded data. For example, the entropy coding unit 220 can perform a context-adaptive variable-length coding (CAVLC) operation, a CABAC operation, a variable-to-variable (V2V) length coding operation, a syntax-based context-adaptive binary arithmetic coding (SBAC) operation, a Probability Interval Partitioning Entropy (PIPE) coding operation, an Exponential-Golomb coding operation or another type of entropy coding operation on the data. In some examples, the entropy coding unit 220 may operate in bypass mode, where the syntax elements are not entropy encoded.

[0087] The video encoder 200 can output a bit stream that includes the entropy-encoded syntax elements necessary to reconstruct blocks of a slice or image. In particular, the entropy coding unit 220 can output the bit stream.

[0088] The operations described above are described in relation to a block. Such description should be understood as covering operations for a luma coding block and/or chroma coding blocks. As described above, in some examples, the luma coding block and chroma coding blocks are luma and chroma components of a CU. In some examples, the luma coding block and the chroma coding blocks are luma and chroma components of a PU.

[0089] In some examples, operations performed in relation to a luma coding block do not need to be repeated for chroma coding blocks. As an example, the operations to identify a motion vector (MV) and a reference image for a luma coding block do not need to be repeated to identify an MV and a reference image for the chroma blocks. Instead, the MV for the luma coding block can be scaled to determine the MV for the chroma blocks, and the reference image can be the same. As another example, the intra prediction process can be the same for the luma coding blocks and the chroma coding blocks.

[0090] As discussed above, the filter unit 216 may perform one or more filter operations on reconstructed blocks. In some examples, such as in HEVC, the filter unit 216 can employ two in-loop filters, including a deblocking filter (DBF) and a sample adaptive offset (SAO) filter.

[0091] The input to the deblocking filter coding tool is the image reconstructed after prediction (for example, intra prediction or inter prediction, although other prediction modes are possible).
The deblocking filter detects artifacts at coded block boundaries and attenuates them by applying a selected filter. As described in Norkin et al., HEVC Deblocking Filter, IEEE Trans. Circuits Syst. Video Technol., 22(12):1746-1754 (2012), compared to the H.264/AVC deblocking filter, the HEVC deblocking filter has lower computational complexity and better parallel processing capability, while still achieving a significant reduction of visual artifacts.

[0092] The input to the SAO filter is the reconstructed image after the deblocking filter has been invoked. The concept of SAO is to reduce the average sample distortion of a region by first classifying the samples of the region into multiple categories with a selected classifier, obtaining an offset for each category, and then adding the offset to each sample of the category, where the classifier index and the offsets of the region are encoded in the bit stream. In HEVC, the region (the unit for signaling of SAO parameters) is defined as a coding tree unit (CTU). Two SAO types that can satisfy low-complexity requirements are adopted in HEVC: edge offset (EO) and band offset (BO). An index of the SAO type is coded (which is in the range [0, 2]).

[0093] For EO, the sample classification is based on a comparison between current samples and neighboring samples according to one-dimensional (1-D) directional patterns: horizontal, vertical, 135° diagonal and 45° diagonal. FIGS. 5A through 5D each illustrate a 1-D directional pattern for Edge Offset sample classification. FIG. 5A illustrates a horizontal pattern (EO class = 0), FIG. 5B illustrates a vertical pattern (EO class = 1), FIG. 5C illustrates a 135° diagonal pattern (EO class = 2) and FIG. 5D illustrates a 45° diagonal pattern (EO class = 3). EO is described in detail in Fu et al., Sample adaptive offset in the HEVC standard, IEEE Trans. Circuits Syst. Video Technol., 22(12):1755-1764 (2012).

[0094] According to the selected EO pattern, five categories denoted by edgeIdx in Table 1 are further defined. For edgeIdx equal to 0~3, the magnitude of an offset can be signaled while the sign flag is implicitly coded, that is, a negative offset for edgeIdx equal to 0 or 1 and a positive offset for edgeIdx equal to 2 or 3. For edgeIdx equal to 4, the offset is always set to 0, which means that no operation is required for this case. In Table 1, c denotes the current sample and a and b denote its two neighbors along the selected 1-D pattern. A sketch of this classification is given after the table.

Table 1: classification for EO

  Category (edgeIdx) | Condition
  0                  | c < a && c < b
  1                  | (c < a && c == b) || (c == a && c < b)
  2                  | (c > a && c == b) || (c == a && c > b)
  3                  | c > a && c > b
  4                  | None of the above
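The edgeIdx derivation in Table 1 maps directly to a small comparison function. The following C++ sketch is illustrative only (the function name and the 8-bit sample type are assumptions, not part of any standard API); it returns the category of Table 1 for one sample:

    #include <cstdint>

    // Illustrative sketch of the SAO edge-offset classification in Table 1.
    // 'cur' is the current sample c; 'a' and 'b' are its two neighbors along
    // the selected 1-D directional pattern (EO class 0..3).
    int edgeIdx(uint8_t cur, uint8_t a, uint8_t b) {
        if (cur < a && cur < b) return 0;                       // local minimum
        if ((cur < a && cur == b) || (cur == a && cur < b)) return 1;
        if ((cur > a && cur == b) || (cur == a && cur > b)) return 2;
        if (cur > a && cur > b) return 3;                       // local maximum
        return 4;                                               // no offset applied
    }

Per paragraph [0094], the decoder would then add the signaled offset magnitude, with an implicitly negative sign for categories 0 and 1 and an implicitly positive sign for categories 2 and 3.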
[0095] For BO, the sample classification is based on the sample values. Each color component can have its own SAO parameters. BO implies that an offset is added to all samples of the same band. The range of sample values is equally divided into 32 bands. For 8-bit samples ranging from 0 to 255, the width of a band is 8, and sample values from 8k to 8k+7 belong to band k, where k ranges from 0 to 31. The average difference between the original samples and the reconstructed samples in a band (that is, the offset of a band) is signaled to the decoder. There may be no restriction on offset signs. The offsets of four consecutive bands (and, in some examples, only the offsets of four consecutive bands) and the position of the starting band can be signaled to the decoder.

[0096] To reduce the side information, multiple CTUs can be merged to share SAO parameters, by copying the parameters of the left CTU (by setting sao_merge_left_flag equal to 1) or of the above CTU (by setting sao_merge_up_flag equal to 1).

[0097] In addition to the modified deblocking (DB) and HEVC SAO methods, JEM includes another filtering method, called Geometry transformation-based Adaptive Loop Filtering (GALF). GALF aims to improve the coding efficiency of the ALF studied in the HEVC stage by introducing several new aspects. ALF aims to minimize the mean square error between original samples and decoded samples using a Wiener-based adaptive filter. The samples in an image are classified into multiple categories, and the samples in each category are then filtered with their associated adaptive filter. The filter coefficients can be signaled or inherited to optimize the trade-off between the mean square error and the overhead. A GALF scheme can further improve the performance of ALF by introducing geometric transformations, such as rotation, diagonal flip and vertical flip, to be applied to the samples in the filter support region, depending on the orientation of the gradient of the reconstructed samples before ALF. The input to ALF/GALF is the image reconstructed after SAO has been invoked.

[0098] GALF was proposed in Karczewicz et al., EE2.5: Improvements on adaptive loop filter, Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, doc. JVET-B0060, 2nd Meeting: San Diego, USA, 20 Feb - 26 Feb 2016, and Karczewicz et al., EE2.5: Improvements on adaptive loop filter, Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, doc. JVET-C0038, 3rd Meeting: Geneva, CH, 26 May - 1 June 2016. GALF was adopted in the latest version of JEM, i.e., JEM3.0. In GALF, the classification is modified with the diagonal gradients taken into account, and geometric transformations can be applied to the filter coefficients. Each 2 x 2 block is categorized into one of 25 classes based on its directionality and quantized activity value. Details are described in the following subsections.

[0099] As described in C. Tomasi and R. Manduchi, Bilateral filtering for gray and color images, in Proc. IEEE ICCV, Bombay, India, January 1998, bilateral filtering can help prevent undesirable over-smoothing of pixels at edges. The main idea of bilateral filtering is that the weighting of neighboring samples takes the pixel values themselves into account, so as to weight more heavily pixels with similar luminance or chrominance values. A sample located at (i, j) is filtered using its neighboring sample at (k, l). The weight ω(i, j, k, l) is the weight assigned to the sample (k, l) for filtering the sample (i, j), and is defined as:

    ω(i, j, k, l) = exp( - ((i - k)^2 + (j - l)^2) / (2σ_d^2) - (I(i, j) - I(k, l))^2 / (2σ_r^2) )    (1)

In equation (1) above, I(i, j) and I(k, l) are the intensity values of the samples (i, j) and (k, l), respectively, σ_d is the spatial parameter, and σ_r is the range parameter. The definitions of the spatial parameter and the range parameter are provided below. The filtering process, with the filtered sample value denoted I_D(i, j), can be defined according to equation (2) below:
    I_D(i, j) = ( Σ_{k,l} I(k, l) · ω(i, j, k, l) ) / ( Σ_{k,l} ω(i, j, k, l) )    (2)

[00100] The properties (or strength) of the bilateral filter are controlled by these two parameters. Samples located closer to the sample to be filtered, and samples with a smaller intensity difference from the sample to be filtered, have larger weights than samples that are farther away and have larger intensity differences.

[00101] As described in Jacob Strom et al., Bilateral filter after inverse transform, JVET-D0069, 4th Meeting: Chengdu, CN, 15-21 October 2016 (hereinafter, JVET-D0069), each reconstructed sample in the transform unit (TU) is filtered using only its directly adjacent reconstructed neighbor samples. The filter has a plus-sign-shaped filter aperture centered on the sample to be filtered, as shown in FIG. 6. FIG. 6 is a conceptual diagram illustrating a current block 600 that includes the current sample 602 and the neighboring samples 604 through 610 used in the bilateral filtering process. The spatial parameter (i.e., σ_d) can be defined based on the size of the transform unit, and the range parameter (i.e., σ_r) can be defined based on the QP used for the current block 600. Equations (3) and (4) provide an example of how the spatial and range parameters can be determined:

    σ_d = 0.92 - min(TU block width, TU block height) / 40    (3)

    σ_r = max( (QP - 17) / 2 , 0.01 )    (4)

[00102] As described in Jacob Strom et al., Bilateral filter strength based on prediction mode, JVET-E0032, 5th Meeting: Geneva, CH, 12-20 January 2017 (hereinafter, JVET-E0032), to further reduce the coding loss under the low-delay configuration, the filter strength is further designed to depend on the coded mode. For intra coded blocks, equation (3) above is still used. For inter coded blocks, the following equation (5) is applied:

    σ_d = 0.72 - min(TU block width, TU block height) / 40    (5)

[00103] Note that the proposed bilateral filtering method may only be applied to luma blocks with at least one non-zero coefficient. For chroma blocks and for luma blocks with all-zero coefficients, the bilateral filtering method can always be disabled.

[00104] For samples located at the top and left boundaries of the TU (that is, the top row and the left column), only neighboring samples within the current TU are used to filter the current sample. FIG. 7 is a conceptual diagram illustrating how neighboring samples within a current TU (for example, a 4x4 TU) can be used to filter a current sample. FIG. 7 illustrates the current TU 700 as including the current sample 702 and the neighboring samples 704 through 710. As shown in FIG. 7, the left neighboring sample 710 of the current sample 702 is not included in the current TU 700. Therefore, the left neighboring sample 710 cannot be used in the filtering process of the current sample 702.

[00105] The filter unit 216 can apply a bilateral filter according to the techniques of this disclosure. For example, the filter unit 216 can apply a bilateral filter to reconstructed samples of a current block generated by the reconstruction unit 214 according to equation (2) above. After applying the bilateral filter to the reconstructed samples of the current block, the filter unit 216 can store a filtered version of the current block in the decoded image buffer 218. The filtered version of the current block can then be used as a reference image in the encoding of another image of the video data, as described elsewhere in this disclosure. A sketch of this bilateral filtering step follows.
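To make equations (1) through (5) concrete, the following C++ sketch applies the plus-sign-shaped bilateral filter to one sample. It is a minimal illustration under stated assumptions, not the codec's actual implementation: the function names and the flat sample buffer with explicit width are assumptions, and the boundary handling of paragraph [00104] is reduced to skipping neighbors that fall outside the TU.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Spatial and range parameters per equations (3)-(5).
    double sigmaD(int tuWidth, int tuHeight, bool isIntra) {
        double base = isIntra ? 0.92 : 0.72;      // eq. (3) for intra, eq. (5) for inter
        return base - std::min(tuWidth, tuHeight) / 40.0;
    }
    double sigmaR(int qp) { return std::max((qp - 17) / 2.0, 0.01); }  // eq. (4)

    // Weight of neighbor (k, l) for filtering sample (i, j), per equation (1).
    double bilateralWeight(int i, int j, int k, int l,
                           double centerVal, double neighborVal,
                           double sd, double sr) {
        double spatial = ((i - k) * (i - k) + (j - l) * (j - l)) / (2.0 * sd * sd);
        double range = (centerVal - neighborVal) * (centerVal - neighborVal)
                       / (2.0 * sr * sr);
        return std::exp(-spatial - range);
    }

    // Filters sample (i, j) of a width x height TU using the plus-sign aperture
    // of FIG. 6 (the sample itself and its above/below/left/right neighbors),
    // per equation (2). Neighbors outside the TU are skipped, mirroring the
    // boundary behavior described for FIG. 7.
    double filterSample(const std::vector<double>& tu, int width, int height,
                        int i, int j, double sd, double sr) {
        static const int di[] = {0, -1, 1, 0, 0};
        static const int dj[] = {0, 0, 0, -1, 1};
        double center = tu[i * width + j];
        double num = 0.0, den = 0.0;
        for (int n = 0; n < 5; ++n) {
            int k = i + di[n], l = j + dj[n];
            if (k < 0 || k >= height || l < 0 || l >= width) continue;
            double w = bilateralWeight(i, j, k, l, center, tu[k * width + l], sd, sr);
            num += tu[k * width + l] * w;
            den += w;
        }
        return num / den;   // den > 0: the center sample always contributes weight 1
    }

Because the center sample's own weight in equation (1) is exp(0) = 1, the denominator of equation (2) is never zero, so the division is always well defined.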
[00106] The bilateral filtering design in JVET-D0069 and JVET-E0032 may have the following potential problem. In particular, the bilateral filter is applied right after the reconstruction of a block. Therefore, the video encoder 200 may have to wait until the filtering process for a current block is completed before the next neighboring block can be encoded. Such a design can decrease pipeline performance, which may be undesirable.

[00107] The techniques of this disclosure can address the potential problem mentioned above. Some of the proposed techniques can be combined. The proposed techniques can be applied to other in-loop filtering methods that depend on certain known information to implicitly derive adaptive filter parameters, or to filters with explicit parameter signaling.

[00108] According to one or more techniques of this disclosure, the filter unit 216 can selectively filter samples of a current block so that the filtering does not prevent parallel processing of neighboring blocks. For example, the filter unit 216 can categorize samples of the current block as to be filtered or not to be filtered and only perform bilateral filtering on samples categorized as to be filtered (i.e., the filter unit 216 can refrain from bilaterally filtering samples categorized as not to be filtered). In this way, the filter unit 216 can still obtain some of the benefits of filtering while still being able to process neighboring blocks in parallel.

[00109] The filter unit 216 can categorize samples of the current block as to be filtered or not to be filtered in various ways. As an example, the filter unit 216 can perform the categorization based on whether the samples can be used to predict samples of neighboring blocks. As another example, the filter unit 216 can perform the categorization based on whether the samples are located in a predefined region of the current block. As another example, the filter unit 216 can perform the categorization based on whether the samples are actually used to predict neighboring blocks.

[00110] FIG. 8 is a conceptual diagram illustrating an example of how samples can be categorized, according to one or more techniques of the present disclosure. As shown in FIG. 8, the image 800 includes the current block 810, the lower neighboring block 820 and the right neighboring block 830.

[00111] As discussed above, the filter unit 216 can categorize the samples of the current block based on whether the samples can be used to predict samples of neighboring blocks (for example, in intra prediction or LM mode). For example, the filter unit 216 can categorize as not to be filtered all samples of the current block that could possibly be used by one or more neighboring blocks for intra prediction, without assessing whether the samples actually are or will be used for intra prediction. To illustrate, if a first sample of the current block can be used by neighboring blocks for intra prediction, the filter unit 216 can categorize the first sample as not to be filtered and refrain from performing bilateral filtering on the first sample.
On the other hand, if a second sample of the current block cannot be used by neighboring blocks for intra prediction, the filter unit 216 can categorize the second sample as to be filtered and perform bilateral filtering on the second sample. In some instances, the filter unit 216 may determine that the samples located in the rightmost column or in the bottom row of the current block can be used by neighboring blocks for intra prediction (assuming a horizontal raster scan order; it is understood that the rightmost column and the bottom row are examples of the boundary column/row, and that other columns/rows can apply with other scan orders). For example, in the example of FIG. 8, the filter unit 216 can categorize the samples in the rightmost column 812 and the samples in the bottom row 814 as not to be filtered, because it is possible for the neighboring blocks 820 and 830 to use the samples in the rightmost column 812 and the samples in the bottom row 814 for intra prediction.

[00112] As discussed above, the filter unit 216 can categorize samples of the current block based on whether the samples are located in a predefined region of the current block. This technique may be similar, and may in some circumstances overlap, to the categorization based on whether the samples can be used by neighboring blocks for intra prediction. For example, the predefined region of the current block can include the rightmost column and the bottom row of the current block.

[00113] As discussed above, the filter unit 216 can perform the categorization based on whether the samples are actually used to predict neighboring blocks. To determine which samples of the current block are used by neighboring blocks, the filter unit 216 can determine, based on information received from the mode selection unit 202, whether the neighboring blocks of the current block are coded with an intra mode. In response to determining that a right neighboring block (for example, block 830) of the current block is coded using intra prediction, the filter unit 216 can determine that samples of the current block that are located in the rightmost column (for example, the samples in column 812) of the current block are used by neighboring blocks for intra prediction. However, in response to determining that the right neighboring block (for example, block 830) of the current block is not coded using intra prediction (for example, it is coded using inter prediction), the filter unit 216 can determine that the samples of the current block that are located in the rightmost column (for example, the samples in column 812) of the current block are not used by neighboring blocks for intra prediction. Similarly, in response to determining that a lower neighboring block (for example, block 820) of the current block is coded using intra prediction, the filter unit 216 can determine that samples of the current block that are located in the bottom row (for example, the samples in row 814) of the current block are used by neighboring blocks for intra prediction. However, in response to determining that the lower neighboring block (for example, block 820) of the current block is not coded using intra prediction, the filter unit 216 can determine that the samples of the current block that are located in the bottom row (for example, the samples in row 814) of the current block are not used by neighboring blocks for intra prediction. A sketch of the region-based categorization is given below.
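The region-based variant of this categorization (paragraphs [00109] through [00112]) reduces to a simple position test. The following C++ sketch is illustrative only and assumes a horizontal raster scan order, a flat sample buffer, and hypothetical function names; it bilaterally filters every sample except those in the rightmost column and the bottom row, which are left unfiltered so that neighboring blocks can be processed in parallel. It reuses filterSample() and the standard includes from the previous sketch.

    // Returns true if sample (row, col) of a width x height block lies in the
    // predefined "not to be filtered" region: the rightmost column or the
    // bottom row, i.e., the samples a right or lower neighboring block could
    // use for intra prediction.
    bool inUnfilteredRegion(int row, int col, int width, int height) {
        return col == width - 1 || row == height - 1;
    }

    // Selectively bilaterally filters a reconstructed block in place. Samples
    // in the predefined region are copied through unfiltered, so the resulting
    // current filtered block contains both filtered and unfiltered samples.
    void selectiveBilateralFilter(std::vector<double>& block, int width,
                                  int height, double sd, double sr) {
        std::vector<double> out(block);   // start from the unfiltered samples
        for (int row = 0; row < height; ++row) {
            for (int col = 0; col < width; ++col) {
                if (inUnfilteredRegion(row, col, width, height)) continue;
                out[row * width + col] =
                    filterSample(block, width, height, row, col, sd, sr);
            }
        }
        block.swap(out);
    }

Filtering into a separate output buffer also ensures that each filtered sample reads only unfiltered neighbor values, matching equation (2).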
[00114] In some examples, as discussed above, the video encoder 200 may use a cross-component linear model (CCLM) prediction mode to predict samples of the video data. In CCLM, the video encoder 200 can use luma samples of the entire block when performing the intra chroma prediction process of a chroma block. Thus, where a neighboring block of the current block depends on reconstructed luma samples (for example, if the neighboring block is encoded using CCLM), the filter unit 216 can determine that all samples of the current block are actually used for prediction of neighboring blocks. In such examples, the filter unit 216 may refrain from performing bilateral filtering on any samples of the current block.

[00115] When categorizing samples based on whether the samples can be used to predict samples of neighboring blocks, or based on whether the samples are located in a predefined region of the current block, the filter unit 216 can avoid determining which samples of the current block, if any, are actually used to predict neighboring blocks. By not determining which samples of the current block are actually used to predict neighboring blocks, the filter unit 216 can reduce the complexity of the filtering process. However, by determining which samples of the current block are actually used to predict neighboring blocks, and only refraining from filtering the samples that are actually used, the filter unit 216 can filter a larger number of samples, which can improve quality/artifact reduction.

[00116] In some examples, as an alternative to selectively filtering some samples of a current block, the filter unit 216 can perform bilateral filtering on all samples of the current block and store two sets of reconstructed blocks/sub-blocks. For example, the filter unit 216 can store a first set that includes the non-bilaterally-filtered samples of the current block and a second set that includes the bilaterally filtered samples of the current block. In some examples, the second set may include samples that are bilaterally filtered but not yet filtered by other in-loop filters, such as the deblocking filter.

[00117] In some examples, the intra prediction unit 226 can always use the first set to perform an intra luma prediction process. In some examples, the intra prediction unit 226 may select the first set or the second set to perform intra luma prediction of neighboring blocks based on intra prediction mode information. For example, if a neighboring block of the current block is encoded with PDPC or ARSS mode, or if the boundary filter is enabled, the intra prediction unit 226 can select the first set for the intra block prediction process of the neighboring block. In some examples, if the chroma mode depends on reconstructed luma samples, for example, the cross-component linear model (CCLM) prediction mode, the intra prediction unit 226 can use the first set of the corresponding luma block when executing the intra chroma prediction process of a chroma block.

[00118] Similarly, the filtering process for the reconstruction of a block/sub-block can be applied after all the intra prediction for the next coded block has been performed. In this document, intra prediction may include, but is not limited to: 1) traditional normal intra prediction using neighboring reconstructed samples, and 2) cross-component linear model (CCLM) prediction. A sketch of the two-set alternative follows.
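As a minimal illustration of the two-set alternative in paragraphs [00116] and [00117], the following C++ sketch keeps both an unfiltered and a bilaterally filtered copy of the reconstructed block and selects between them according to the prediction mode of the neighboring block. The structure, the function name, and the mode enumeration are assumptions for illustration only, not the codec's actual data structures.

    #include <vector>

    enum class NeighborMode { PDPC, ARSS, BoundaryFilter, CCLM, Other };

    // Two reconstructed versions of the same block: the first set (unfiltered)
    // and the second set (bilaterally filtered, before other in-loop filters).
    struct ReconstructedBlock {
        std::vector<double> unfiltered;   // first set
        std::vector<double> filtered;     // second set
    };

    // Chooses which set a neighboring block's intra prediction reads. Modes
    // described as depending on unfiltered reconstruction (PDPC, ARSS,
    // boundary filter, CCLM) read the first set; others may read the second.
    const std::vector<double>& referenceSamples(const ReconstructedBlock& blk,
                                                NeighborMode mode) {
        switch (mode) {
            case NeighborMode::PDPC:
            case NeighborMode::ARSS:
            case NeighborMode::BoundaryFilter:
            case NeighborMode::CCLM:
                return blk.unfiltered;
            default:
                return blk.filtered;
        }
    }

The design trades memory (two copies of each reconstruction) for pipeline flexibility: filtering never has to be undone, and each consumer simply reads the version it needs.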
[00119] The video encoder 200 represents an example of a device configured to encode video data, the device including a memory configured to store video data (for example, the decoded image buffer 218) and one or more processors configured to obtain reconstructed samples of a current block of video data, and selectively bilaterally filter the reconstructed samples of the current block to generate a current filtered block, wherein selectively bilaterally filtering the reconstructed samples of the current block comprises refraining from bilaterally filtering at least one reconstructed sample of the current block, so that the current filtered block includes at least one bilaterally unfiltered sample.

[00120] FIG. 9 is a block diagram illustrating an illustrative video decoder 300 that can perform the techniques of this disclosure. The video decoder 300 represents an example of the video decoder 30 of FIG. 1, although other examples are possible. FIG. 9 is provided for purposes of explanation and is not limiting on the techniques as broadly exemplified and described in this disclosure. For purposes of explanation, this disclosure describes the video decoder 300 in accordance with the techniques of JEM and HEVC. However, the techniques of this disclosure can be performed by video coding devices that are configured for other video coding standards.

[00121] In the example of FIG. 9, the video decoder 300 includes the encoded image buffer (CPB) memory 320, the entropy decoding unit 302, the prediction processing unit 304, the inverse quantization unit 306, the inverse transform processing unit 308, the reconstruction unit 310, the filter unit 312 and the decoded image buffer (DPB) 314. The prediction processing unit 304 includes the motion compensation unit 316 and the intra prediction unit 318. The prediction processing unit 304 may include additional units to perform prediction according to other prediction modes. As examples, the prediction processing unit 304 may include a palette unit, an intra-block copy unit (which may form part of the motion compensation unit 316), an affine unit, a linear model (LM) unit, among others. In other examples, the video decoder 300 may include more, fewer or different functional components.

[00122] The CPB memory 320 can store video data, such as an encoded video bit stream, to be decoded by the components of the video decoder 300. The video data stored in the CPB memory 320 can be obtained, for example, from the storage medium 28 (FIG. 1). The CPB memory 320 may include a CPB that stores encoded video data (for example, syntax elements) of an encoded video bit stream. In addition, the CPB memory 320 can store video data other than syntax elements of an encoded image, such as temporary data representing outputs of the various units of the video decoder 300. The DPB 314 generally stores decoded images, which the video decoder 300 can output and/or use as reference video data when decoding subsequent data or images of the encoded video bit stream. The CPB memory 320 and the DPB 314 can be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magneto-resistive RAM (MRAM), resistive RAM (RRAM) or other types of memory devices. The CPB memory 320 and the DPB 314 can be provided by the same memory device or by separate memory devices.
In various examples, the CPB memory 320 may be on-chip with other components of the video decoder 300, or off-chip relative to those components.

[00123] The various units shown in FIG. 9 are illustrated to assist in understanding the operations performed by the video decoder 300. The units can be implemented as fixed-function circuits, programmable circuits or a combination thereof. Similar to FIG. 2, fixed-function circuits refer to circuits that provide particular functionality and are preset in the operations that can be performed. Programmable circuits refer to circuits that can be programmed to perform various tasks and provide flexible functionality in the operations that can be performed. For example, programmable circuits may execute software or firmware that causes the programmable circuits to operate in the manner defined by the instructions of the software or firmware. Fixed-function circuits can execute software instructions (for example, to receive parameters or output parameters), but the types of operations that fixed-function circuits perform are generally immutable. In some examples, one or more of the units can be distinct circuit blocks (fixed-function or programmable), and in some examples, the one or more units can be integrated circuits.

[00124] The video decoder 300 can include ALUs, EFUs, digital circuits, analog circuits and/or programmable cores formed from programmable circuits. In examples where the operations of the video decoder 300 are performed by software executing on the programmable circuits, on-chip or off-chip memory can store the instructions (for example, object code) of the software that the video decoder 300 receives and executes.

[00125] The entropy decoding unit 302 can receive encoded video data from the CPB and entropy decode the video data to reproduce syntax elements. The prediction processing unit 304, the inverse quantization unit 306, the inverse transform processing unit 308, the reconstruction unit 310 and the filter unit 312 can generate decoded video data based on the syntax elements extracted from the bit stream.

[00126] In general, the video decoder 300 reconstructs an image on a block-by-block basis. The video decoder 300 can perform a reconstruction operation on each block individually (where the block currently being reconstructed, that is, decoded, can be referred to as a current block).

[00127] The entropy decoding unit 302 can entropy decode syntax elements defining quantized transform coefficients of a quantized transform coefficient block, as well as transform information, such as a quantization parameter (QP) and/or transform mode indication(s). The inverse quantization unit 306 can use the QP associated with the quantized transform coefficient block to determine a degree of quantization and, likewise, a degree of inverse quantization for the inverse quantization unit 306 to apply. The inverse quantization unit 306 can, for example, perform a bitwise left-shift operation to inversely quantize the quantized transform coefficients. The inverse quantization unit 306 can thereby form a transform coefficient block including transform coefficients.
[00128] After the inverse quantization unit 306 forms the transform coefficient block, the inverse transform processing unit 308 can apply one or more inverse transforms to the transform coefficient block to generate a residual block associated with the current block. For example, the inverse transform processing unit 308 can apply an inverse DCT, an inverse integer transform, an inverse Karhunen-Loeve transform (KLT), an inverse rotational transform, an inverse directional transform or another inverse transform to the coefficient block.

[00129] Additionally, the prediction processing unit 304 generates a prediction block according to prediction syntax elements that were entropy decoded by the entropy decoding unit 302. For example, if the prediction information syntax elements indicate that the current block is inter predicted, the motion compensation unit 316 can generate the prediction block. In this case, the prediction information syntax elements can indicate a reference image in the DPB 314 from which to retrieve a reference block, as well as a motion vector identifying the location of the reference block in the reference image relative to the location of the current block in the current image. The motion compensation unit 316 can generally perform the inter prediction process in a manner that is substantially similar to that described in relation to the motion compensation unit 224 (FIG. 2).

[00130] As another example, if the prediction information syntax elements indicate that the current block is intra predicted, the intra prediction unit 318 can generate the prediction block according to an intra prediction mode indicated by the prediction information syntax elements. Again, the intra prediction unit 318 can generally perform the intra prediction process in a manner that is substantially similar to that described in relation to the intra prediction unit 226 (FIG. 2). The intra prediction unit 318 can retrieve data of neighboring samples for the current block from the DPB 314.

[00131] The reconstruction unit 310 can reconstruct the current block using the prediction block and the residual block. For example, the reconstruction unit 310 can add samples of the residual block to corresponding samples of the prediction block to reconstruct the current block.

[00132] The filter unit 312 can perform one or more filter operations on reconstructed blocks. For example, the filter unit 312 can perform deblocking operations to reduce blockiness artifacts along the edges of the reconstructed blocks. As illustrated by dashed lines, the operations of the filter unit 312 are not necessarily performed in all examples.

[00133] The filter unit 312 can generally perform a filtering process in a manner that is substantially similar to that described in relation to the filter unit 216 (FIG. 2). For example, the filter unit 312 can selectively filter samples of a current block so that the filtering does not prevent parallel processing of neighboring blocks. For example, the filter unit 312 can categorize samples of the current block as to be filtered or not to be filtered and only perform bilateral filtering on samples categorized as to be filtered (i.e., the filter unit 312 can refrain from bilaterally filtering samples categorized as not to be filtered). In this way, the filter unit 312 can still obtain some of the benefits of filtering while still being able to process neighboring blocks in parallel.

[00134] The video decoder 300 can store the reconstructed blocks in the DPB 314.
For example, the filter unit 312 can store the filtered reconstructed blocks in the DPB 314. As discussed above, the DPB 314 can provide reference information, such as samples of a current image for intra prediction and previously decoded images for subsequent motion compensation, to the prediction processing unit 304. In addition, the video decoder 300 can output the decoded images from the DPB for subsequent display on a video device, such as the video device 32 of FIG. 1.

[00135] FIG. 10 is a flow chart illustrating an illustrative process for filtering a reconstructed block of video data, according to one or more techniques of this disclosure. For purposes of explanation, the method of FIG. 10 is described below as being executed by the video decoder 30/300 and its components (for example, as illustrated in FIGS. 1 and 9), although the method of FIG. 10 can be performed by other video decoders or video encoders. For example, the method of FIG. 10 can be performed by the video encoder 20/200 (for example, as illustrated in FIGS. 1 and 2).

[00136] The video decoder 30 can reconstruct samples of a current block of video data (1002). For example, the reconstruction unit 310 can add samples of a residual block (generated by the inverse transform processing unit 308) to corresponding samples of a prediction block (generated by the prediction processing unit 304) to reconstruct the samples of the current block.

[00137] The video decoder 30 can categorize samples of the current block as either to be filtered or not to be filtered (1004). As discussed above, the filter unit 216 can categorize the samples of the current block as either to be filtered or not to be filtered in various ways. As an example, the filter unit 216 can perform the categorization based on whether the samples can be used to predict samples of neighboring blocks. As another example, the filter unit 216 can perform the categorization based on whether the samples are located in a predefined region of the current block. As another example, the filter unit 216 can perform the categorization based on whether the samples are actually used to predict neighboring blocks. In some instances, categorizing a sample can be interpreted as determining whether to filter it. For example, the filter unit 216 can categorize a particular sample by determining whether or not to filter the particular sample, and need not assign a value to any attribute or variable for the particular sample.

[00138] The video decoder 30 can filter the samples of the current block that are categorized as to be filtered (1006). For example, the filter unit 216 can perform a bilateral filtering process on each sample categorized as to be filtered, according to equation (2) above. In particular, the filter unit 216 can replace each sample categorized as to be filtered with a weighted average of itself and its neighbors.

[00139] The video decoder 30 can store the filtered samples of the current block (1008). For example, the filter unit 216 can store a current filtered block (which includes the filtered samples of the current block along with the unfiltered samples categorized as not to be filtered) in the decoded image buffer 314. In addition, the video decoder 30 can output decoded images from the DPB for subsequent display on a video device, such as the video device 32 of FIG. 1. The steps of FIG. 10 are sketched in code below.
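As an illustrative sketch of the overall method of FIG. 10 (steps 1002 through 1008), the following C++ fragment chains the pieces shown earlier. The function name, the flat buffers standing in for the prediction and inverse transform outputs, and the decodedPictureBuffer parameter are hypothetical simplifications, not the decoder's actual components; selectiveBilateralFilter() is reused from the earlier sketch.

    #include <cstddef>
    #include <vector>

    // Steps 1002-1008 of FIG. 10, in simplified form. 'pred' and 'residual'
    // stand in for the outputs of the prediction processing and inverse
    // transform units; sd/sr follow equations (3)-(5).
    void filterReconstructedBlock(const std::vector<double>& pred,
                                  const std::vector<double>& residual,
                                  int width, int height,
                                  double sd, double sr,
                                  std::vector<double>& decodedPictureBuffer) {
        // (1002) Reconstruct: prediction plus residual, sample by sample.
        std::vector<double> block(pred.size());
        for (std::size_t n = 0; n < pred.size(); ++n)
            block[n] = pred[n] + residual[n];

        // (1004)+(1006) Categorize and filter: samples outside the predefined
        // region are bilaterally filtered; boundary samples are left as-is.
        selectiveBilateralFilter(block, width, height, sd, sr);

        // (1008) Store the current filtered block for later output/reference.
        decodedPictureBuffer.insert(decodedPictureBuffer.end(),
                                    block.begin(), block.end());
    }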
[00140] Some aspects of this disclosure have been described with respect to video coding standards for purposes of illustration. However, the techniques described in this disclosure can be useful for other video coding processes, including other standard or proprietary video coding processes not yet developed.

[00141] The techniques described above can be performed by the video encoder 200 and/or the video decoder 300, both of which can generally be referred to as a video coder. Likewise, video coding can refer to video encoding or video decoding, as applicable.

[00142] It should be understood that all the techniques described in this document can be used individually or in combination. This disclosure includes several signaling methods that can change depending on certain factors such as block size, palette size, slice type, etc. Such variation in signaling or inferring the syntax elements can be known a priori to the encoder and decoder, or can be explicitly signaled in the video parameter set (VPS), in the sequence parameter set (SPS), in the picture parameter set (PPS), in the slice header, at a tile level, or elsewhere.

[00143] It is to be recognized that, depending on the example, some acts or events of any of the techniques described in this document can be performed in a different sequence, and can be added, merged or omitted entirely (for example, not all described acts or events are necessary for the practice of the techniques). In addition, in some examples, acts or events can be performed simultaneously, for example, through multi-threaded processing, interrupt processing or multiple processors, rather than sequentially. Additionally, while some aspects of this disclosure are described as being performed by a single module or unit for the sake of clarity, it should be understood that the techniques of this disclosure can be performed by a combination of units or modules associated with a video coder.

[00144] Although particular combinations of various aspects of the techniques are described above, these combinations are provided merely to illustrate examples of the techniques described in this disclosure. Consequently, the techniques of this disclosure should not be limited to these illustrative combinations and may encompass any conceivable combination of the various aspects of the techniques described in this disclosure.

[00145] In one or more examples, the functions described can be implemented in hardware, software, firmware or any combination thereof. If implemented in software, the functions can be stored on or transmitted over a computer-readable medium, as one or more instructions or code, and executed by a hardware-based processing unit. Computer-readable media can include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media, including any medium that facilitates the transfer of a computer program from one place to another, for example, according to a communication protocol. In this way, computer-readable media can generally correspond to (1) tangible computer-readable storage media, which is non-transitory, or (2) a communication medium such as a signal or carrier wave.
Data storage media can be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.

[00146] By way of example, and not by way of limitation, such non-transitory computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, flash memory or any other medium that can be used to store the desired program code in the form of instructions or data structures and that can be accessed by a computer. In addition, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL) or wireless technologies such as infrared, radio and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL or wireless technologies such as infrared, radio and microwave are included in the definition of medium. However, it should be understood that computer-readable storage media and data storage media do not include connections, carrier waves, signals or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used in this document, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

[00147] The instructions can be executed by one or more processors, such as one or more digital signal processors (DSPs), general-purpose microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term processor, as used in this document, can refer to any of the foregoing structures or to any other structure suitable for implementing the techniques described in this document. In addition, in some aspects, the functionality described in this document may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated into a combined codec. In addition, the techniques could be fully implemented in one or more circuits or logic elements.

[00148] The techniques of this disclosure can be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (for example, a chip set). Various components, modules or units are described in this disclosure to emphasize the functional aspects of devices configured to perform the disclosed techniques, but they do not necessarily require realization by different hardware units. Rather, as described above, various units can be combined in a codec hardware unit or provided by a collection of interoperable hardware units, including one or more processors as described above, together with suitable software and/or firmware.
[00149] Several examples have been described. These and other examples are within the scope of the following claims.
Claims (26)

[1] 1. Method of filtering a reconstructed block of video data, the method comprising: obtaining, by one or more processors, reconstructed samples of a current block of video data; and selectively filtering, by the one or more processors, the reconstructed samples of the current block to generate a current filtered block, wherein selectively filtering the reconstructed samples of the current block comprises refraining from filtering at least one reconstructed sample of the current block, so that the current filtered block includes at least one unfiltered sample and at least one filtered sample.

[2] 2. Method according to claim 1, wherein selectively filtering comprises selectively bilaterally filtering the reconstructed samples.

[3] 3. Method according to claim 2, wherein bilaterally filtering a particular sample comprises replacing a value of the particular sample with a weighted average of the value of the particular sample and values of the neighboring samples above, below, to the left and to the right of the particular sample, and wherein selectively filtering the reconstructed samples of the current block comprises bilaterally filtering at least one reconstructed sample of the current block, so that the current filtered block includes at least one bilaterally filtered sample.

[4] 4. Method according to claim 1, wherein selectively filtering the reconstructed samples of the current block comprises: categorizing, by the one or more processors, the reconstructed samples of the current block as to be filtered or not to be filtered, wherein selectively filtering comprises: filtering the reconstructed samples of the current block that are categorized as to be filtered; and refraining from filtering the reconstructed samples of the current block that are categorized as not to be filtered.

[5] 5. Method according to claim 4, wherein categorizing the reconstructed samples comprises: determining which samples of the current block are used to predict at least one neighboring block; categorizing the reconstructed samples of the current block that are used to predict at least one neighboring block as not to be filtered; and categorizing the reconstructed samples of the current block that are not used to predict at least one neighboring block as to be filtered.

[6] 6. Method according to claim 5, wherein determining which samples of the current block are used to predict at least one neighboring block comprises: in response to determining that a right neighboring block of the current block is coded using intra prediction, determining that samples of the current block that are located in a rightmost column of the current block are used to predict at least one neighboring block; and in response to determining that a neighboring block below the current block is coded using intra prediction, determining that samples of the current block that are located in a bottom row of the current block are used to predict at least one neighboring block.

[7] 7.
Method according to claim 5, wherein the current block comprises a current chroma block and a current luma block, and wherein determining which samples of the current block are used to predict at least one neighboring block comprises: in response to determining that a neighboring block of the current chroma block or a corresponding chroma block of the current luma block is coded using a cross-component linear model (CCLM) prediction mode, determining that all samples of the current block are used for prediction of at least one neighboring block.

[8] 8. Method according to claim 4, wherein categorizing the reconstructed samples comprises: categorizing the reconstructed samples of the current block that can be used by neighboring blocks for intra prediction as not to be filtered; and categorizing the reconstructed samples of the current block that cannot be used by neighboring blocks for intra prediction as to be filtered.

[9] 9. Method according to claim 4, wherein categorizing the reconstructed samples comprises: categorizing the reconstructed samples of the current block that are located in a predefined region of the current block as not to be filtered; and categorizing the reconstructed samples of the current block that are not located in the predefined region of the current block as to be filtered.

[10] 10. Method according to claim 9, wherein the predefined region includes a rightmost column of samples of the current block and a bottom row of samples of the current block.

[11] 11. Method according to claim 1, the method being executable on a wireless communication device, wherein the device comprises: a memory configured to store the video data; and a receiver configured to receive the video data and store the video data in the memory.

[12] 12. Method according to claim 11, wherein the wireless communication device is a cellular telephone and the video data is received by the receiver and modulated according to a cellular communication standard.

[13] 13. Apparatus for filtering a reconstructed block of video data, the apparatus comprising: a memory configured to store video data; and one or more processors configured to: obtain reconstructed samples of a current block of video data; and selectively filter the reconstructed samples of the current block to generate a current filtered block, wherein, to selectively filter the reconstructed samples of the current block, the one or more processors are configured to refrain from filtering at least one reconstructed sample of the current block, so that the current filtered block includes at least one unfiltered sample and at least one filtered sample.

[14] 14. Apparatus according to claim 13, wherein, to selectively filter, the one or more processors are configured to selectively bilaterally filter the reconstructed samples.

[15] 15. Apparatus according to claim 13, wherein, to bilaterally filter a particular sample, the one or more processors are configured to replace a value of the particular sample with a weighted average of the value of the particular sample and values of the neighboring samples above, below, to the left and to the right of the particular sample, and wherein, to selectively bilaterally filter the reconstructed samples of the current block, the one or more processors are configured to bilaterally filter at least one reconstructed sample of the current block, so that the current filtered block includes at least one bilaterally filtered sample.

[16] 16.
Apparatus according to claim 13, wherein, to selectively filter the reconstructed samples of the current block, the one or more processors are configured to: categorize the reconstructed samples of the current block as to be filtered or not to be filtered; filter the reconstructed samples of the current block that are categorized as to be filtered; and refrain from filtering the reconstructed samples of the current block that are categorized as not to be filtered.

[17] 17. Apparatus according to claim 16, wherein, to categorize the reconstructed samples, the one or more processors are configured to: determine which samples of the current block are used to predict at least one neighboring block; categorize the reconstructed samples of the current block that are used to predict at least one neighboring block as not to be filtered; and categorize the reconstructed samples of the current block that are not used to predict at least one neighboring block as to be filtered.

[18] 18. Apparatus according to claim 17, wherein, to determine which samples of the current block are used to predict at least one neighboring block, the one or more processors are configured to: determine, in response to determining that a right neighboring block of the current block is coded using intra prediction, that samples of the current block that are located in a rightmost column of the current block are used to predict at least one neighboring block; and determine, in response to determining that a neighboring block below the current block is coded using intra prediction, that samples of the current block that are located in a bottom row of the current block are used to predict at least one neighboring block.

[19] 19. Apparatus according to claim 17, wherein the current block comprises a current chroma block and a current luma block, and wherein, to determine which samples of the current block are used to predict at least one neighboring block, the one or more processors are configured to: determine, in response to determining that a neighboring block of the current chroma block or a corresponding chroma block of the current luma block is coded using a cross-component linear model (CCLM) prediction mode, that all samples of the current block are used for prediction of at least one neighboring block.

[20] 20. Apparatus according to claim 16, wherein, to categorize the reconstructed samples, the one or more processors are configured to: categorize the reconstructed samples of the current block that can be used by neighboring blocks for intra prediction as not to be filtered; and categorize the reconstructed samples of the current block that cannot be used by neighboring blocks for intra prediction as to be filtered.

[21] 21. Apparatus according to claim 16, wherein, to categorize the reconstructed samples, the one or more processors are configured to: categorize the reconstructed samples of the current block that are located in a predefined region of the current block as not to be filtered; and categorize the reconstructed samples of the current block that are not located in the predefined region of the current block as to be filtered.

[22] 22. Apparatus according to claim 21, wherein the predefined region includes a rightmost column of samples of the current block and a bottom row of samples of the current block.
[23] 23. Apparatus according to claim 13, wherein the apparatus is a wireless communication device, the apparatus further comprising: a receiver configured to receive a bit stream that is decodable to obtain the reconstructed samples.

[24] 24. Apparatus according to claim 23, wherein the wireless communication device is a cellular telephone and the bit stream is received by the receiver and modulated according to a cellular communication standard.

[25] 25. Apparatus for filtering a reconstructed block of video data, the apparatus comprising: means for obtaining reconstructed samples of a current block of video data; and means for selectively filtering the reconstructed samples of the current block to generate a current filtered block, wherein the means for selectively filtering the reconstructed samples of the current block are configured to refrain from filtering at least one reconstructed sample of the current block, so that the current filtered block includes at least one unfiltered sample and at least one filtered sample.

[26] 26. Computer-readable storage medium storing instructions that, when executed, cause one or more processors of a device for filtering a reconstructed block of video data to: obtain reconstructed samples of a current block of the video data; and selectively filter the reconstructed samples of the current block to generate a current filtered block, wherein the instructions that cause the one or more processors to selectively filter the reconstructed samples of the current block comprise instructions that cause the one or more processors to refrain from filtering at least one reconstructed sample of the current block, so that the current filtered block includes at least one unfiltered sample and at least one filtered sample.
Similar technologies:
Publication number | Publication date | Title
BR112019015106A2 | 2020-03-10 | Bilateral filters in video encoding with reduced complexity
TW201743619A | 2017-12-16 | Confusion of multiple filters in adaptive loop filtering in video coding
TW201633787A | 2016-09-16 | Coding tree unit (CTU) level adaptive loop filter (ALF)
TW202005399A | 2020-01-16 | Block-based adaptive loop filter (ALF) design and signaling
US20190238845A1 | 2019-08-01 | Adaptive loop filtering on deblocking filter results in video coding
TW201904285A | 2019-01-16 | Enhanced deblocking filtering design in video coding
US10555006B2 | 2020-02-04 | Deriving bilateral filter information based on a prediction mode in video coding
TW201838415A | 2018-10-16 | Determining neighboring samples for bilateral filtering in video coding
US11044473B2 | 2021-06-22 | Adaptive loop filtering classification in video coding
US10721469B2 | 2020-07-21 | Line buffer reduction for adaptive loop filtering in video coding
TW202041024A | 2020-11-01 | Signalling for merge mode with motion vector differences in video coding
WO2019200277A1 | 2019-10-17 | Hardware-friendly sample adaptive offset (SAO) and adaptive loop filter (ALF) for video coding
BR112021002990A2 | 2021-05-11 | Deblocking filter for video encoding and processing
AU2018297286A1 | 2019-12-19 | Division-free bilateral filter
BR112019010547A2 | 2019-09-17 | Indication of bilateral filter usage in video coding
JP2019534631A | 2019-11-28 | Peak sample adaptive offset
TW202041018A | 2020-11-01 | Predictive coefficient coding
BR112021003869A2 | 2021-05-18 | Temporal prediction of adaptive loop filter parameters with reduced memory consumption for video coding
WO2020132094A1 | 2020-06-25 | Adaptive loop filter (ALF) index signaling
Patent family:
Publication number | Publication date
KR20190110548A | 2019-09-30
AU2018212665A1 | 2019-07-04
EP3574650A1 | 2019-12-04
SG11201905243YA | 2019-08-27
CN110169064A | 2019-08-23
US20180220130A1 | 2018-08-02
TW201832562A | 2018-09-01
WO2018140587A1 | 2018-08-02
US10694181B2 | 2020-06-23
CN110169064B | 2021-05-07
Cited references:
Publication number | Filing date | Publication date | Applicant | Title
HU0301368A3 | 2003-05-20 | 2005-09-28 | Amt Advanced Multimedia Techno | Method and equipment for compressing motion picture data
US20070171980A1 | 2006-01-26 | 2007-07-26 | Yen-Lin Lee | Method and related apparatus for decoding video streams
US7873224B2 | 2006-03-01 | 2011-01-18 | Qualcomm Incorporated | Enhanced image/video quality through artifact evaluation
KR101369224B1 | 2007-03-28 | 2014-03-05 | Samsung Electronics Co., Ltd. | Method and apparatus for video encoding and decoding using motion compensation filtering
US7894685B2 | 2008-07-01 | 2011-02-22 | Texas Instruments Incorporated | Method and apparatus for reducing ringing artifacts
US8189943B2 | 2009-03-17 | 2012-05-29 | Mitsubishi Electric Research Laboratories, Inc. | Method for up-sampling depth images
JP5183664B2 | 2009-10-29 | 2013-04-17 | Industrial Technology Research Institute | Deblocking apparatus and method for video compression
KR101681301B1 | 2010-08-12 | 2016-12-01 | SK Telecom Co., Ltd. | Method and apparatus for encoding/decoding of video data capable of skipping filtering mode
US20130177078A1 | 2010-09-30 | 2013-07-11 | Electronics and Telecommunications Research Institute | Apparatus and method for encoding/decoding video using adaptive prediction block filtering
US9930366B2 | 2011-01-28 | 2018-03-27 | Qualcomm Incorporated | Pixel level adaptive intra-smoothing
US20120236936A1 | 2011-03-14 | 2012-09-20 | Segall Christopher A | Video coding based on edge determination
US9344729B1 | 2012-07-11 | 2016-05-17 | Google Inc. | Selective prediction signal filtering
US9118932B2 | 2013-06-14 | 2015-08-25 | Nvidia Corporation | Adaptive filtering mechanism to remove encoding artifacts in video data
US9924175B2 | 2014-06-11 | 2018-03-20 | Qualcomm Incorporated | Determining application of deblocking filtering to palette coded blocks in video coding
US10321140B2 | 2015-01-22 | 2019-06-11 | MediaTek Singapore Pte. Ltd. | Method of video coding for chroma components
MX2017014914A | 2015-05-21 | 2018-06-13 | Huawei Technologies Co., Ltd. | Apparatus and method for video motion compensation
EP3304906A4 | 2015-06-03 | 2019-04-17 | MediaTek Inc. | Method and apparatus of error handling for video coding using intra block copy mode
KR102325395B1 | 2015-11-17 | 2021-11-10 | Huawei Technologies Co., Ltd. | Method and apparatus of adaptive filtering of samples for video coding
US10638126B2 | 2017-05-05 | 2020-04-28 | Qualcomm Incorporated | Intra reference filter for video coding
GB2567249A | 2017-10-09 | 2019-04-10 | Canon KK | New sample sets and new down-sampling schemes for linear component sample prediction
US10958928B2 | 2018-04-10 | 2021-03-23 | Qualcomm Incorporated | Decoder-side motion vector derivation for video coding
US10805624B2 | 2018-07-16 | 2020-10-13 | Tencent America LLC | Determination of parameters of an affine model
US10819979B2 | 2018-09-06 | 2020-10-27 | Tencent America LLC | Coupled primary and secondary transform
KR20210089132A | 2018-11-06 | 2021-07-15 | Beijing Bytedance Network Technology Co., Ltd. | Intra prediction based on location
US11178396B2 | 2018-11-14 | 2021-11-16 | Tencent America LLC | Constrained intra prediction and unified most probable mode list generation
WO2020108591A1 | 2018-12-01 | 2020-06-04 | Beijing Bytedance Network Technology Co., Ltd. | Parameter derivation for intra prediction
AU2019391197A1 | 2018-12-07 | 2021-06-17 | Beijing Bytedance Network Technology Co., Ltd. | Context-based intra prediction
WO2020130745A1 | 2018-12-21 | 2020-06-25 | Samsung Electronics Co., Ltd. | Encoding method and device thereof, and decoding method and device thereof
WO2020139008A1 | 2018-12-28 | 2020-07-02 | Electronics and Telecommunications Research Institute | Video encoding/decoding method, apparatus, and recording medium having bitstream stored thereon
WO2020156532A1 | 2019-02-01 | 2020-08-06 | Beijing Bytedance Network Technology Co., Ltd. | Restrictions on in-loop reshaping
KR20210121014A | 2019-02-02 | 2021-10-07 | Beijing Bytedance Network Technology Co., Ltd. | Buffer initialization for intra block copying in video coding
CN113383543A | 2019-02-02 | 2021-09-10 | Beijing Bytedance Network Technology Co., Ltd. | Prediction using extra buffer samples for intra block copy in video coding
EP3903482A1 | 2019-02-22 | 2021-11-03 | Beijing Bytedance Network Technology Co., Ltd. | Neighbouring sample selection for intra prediction
KR20210116618A | 2019-02-22 | 2021-09-27 | LG Electronics Inc. | Video decoding method and apparatus based on CCLM prediction in video coding system
EP3903493A1 | 2019-02-24 | 2021-11-03 | Beijing Bytedance Network Technology Co., Ltd. | Parameter derivation for intra prediction
CN111699681A | 2019-06-25 | 2020-09-22 | Peking University | Video image processing method, device and storage medium
WO2021141519A2 | 2020-05-26 | 2021-07-15 | Huawei Technologies Co., Ltd. | Method and apparatus of high-level syntax for smoothing intra-prediction techniques
Legal status:
Date | Code | Event
2021-05-04 | B11A | Dismissal according to art. 33 of the IPL - examination not requested within 36 months of filing
2021-07-20 | B11Y | Definitive dismissal - extension of time limit for request of examination expired [chapter 11.1.1 patent gazette]
2021-10-13 | B350 | Update of information on the portal [chapter 15.35 patent gazette]
Priority:
Application number | Filing date | Title
US 62/451,555 (US201762451555P) | 2017-01-27 | (provisional; no title listed)
US 15/879,359 (granted as US10694181B2) | 2018-01-24 | Bilateral filters in video coding with reduced complexity
PCT/US2018/015206 (published as WO2018140587A1) | 2018-01-25 | Bilateral filters in video coding with reduced complexity